00:00:00.001 Started by upstream project "autotest-per-patch" build number 122881 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.054 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.055 The recommended git tool is: git 00:00:00.055 using credential 00000000-0000-0000-0000-000000000002 00:00:00.057 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.092 Fetching changes from the remote Git repository 00:00:00.093 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.152 Using shallow fetch with depth 1 00:00:00.152 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.152 > git --version # timeout=10 00:00:00.199 > git --version # 'git version 2.39.2' 00:00:00.199 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.200 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.200 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.986 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.999 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.011 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:04.011 > git config core.sparsecheckout # timeout=10 00:00:04.024 > git read-tree -mu HEAD # timeout=10 00:00:04.041 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:04.058 Commit message: "inventory/dev: add missing long names" 00:00:04.058 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:04.152 [Pipeline] Start of Pipeline 00:00:04.167 [Pipeline] library 00:00:04.168 Loading library shm_lib@master 00:00:04.168 Library shm_lib@master is cached. Copying from home. 00:00:04.184 [Pipeline] node 00:00:19.186 Still waiting to schedule task 00:00:19.186 Waiting for next available executor on ‘vagrant-vm-host’ 00:08:25.110 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:08:25.112 [Pipeline] { 00:08:25.123 [Pipeline] catchError 00:08:25.125 [Pipeline] { 00:08:25.138 [Pipeline] wrap 00:08:25.149 [Pipeline] { 00:08:25.159 [Pipeline] stage 00:08:25.161 [Pipeline] { (Prologue) 00:08:25.179 [Pipeline] echo 00:08:25.180 Node: VM-host-SM4 00:08:25.186 [Pipeline] cleanWs 00:08:25.194 [WS-CLEANUP] Deleting project workspace... 00:08:25.194 [WS-CLEANUP] Deferred wipeout is used... 
00:08:25.200 [WS-CLEANUP] done 00:08:25.354 [Pipeline] setCustomBuildProperty 00:08:25.419 [Pipeline] nodesByLabel 00:08:25.421 Found a total of 1 nodes with the 'sorcerer' label 00:08:25.430 [Pipeline] httpRequest 00:08:25.436 HttpMethod: GET 00:08:25.436 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:08:25.443 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:08:25.445 Response Code: HTTP/1.1 200 OK 00:08:25.447 Success: Status code 200 is in the accepted range: 200,404 00:08:25.447 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:08:25.593 [Pipeline] sh 00:08:25.880 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:08:25.897 [Pipeline] httpRequest 00:08:25.901 HttpMethod: GET 00:08:25.902 URL: http://10.211.164.101/packages/spdk_9526734a37c67dd1ff7cfb421cc5ef7ab5517af0.tar.gz 00:08:25.902 Sending request to url: http://10.211.164.101/packages/spdk_9526734a37c67dd1ff7cfb421cc5ef7ab5517af0.tar.gz 00:08:25.903 Response Code: HTTP/1.1 200 OK 00:08:25.903 Success: Status code 200 is in the accepted range: 200,404 00:08:25.904 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_9526734a37c67dd1ff7cfb421cc5ef7ab5517af0.tar.gz 00:08:28.064 [Pipeline] sh 00:08:28.364 + tar --no-same-owner -xf spdk_9526734a37c67dd1ff7cfb421cc5ef7ab5517af0.tar.gz 00:08:31.656 [Pipeline] sh 00:08:31.933 + git -C spdk log --oneline -n5 00:08:31.933 9526734a3 vbdev_lvol: add lvol set parent rpc interface 00:08:31.933 e1ade818b vbdev_lvol: add lvol set external parent 00:08:31.933 95a28e501 lvol: add lvol set external parent 00:08:31.933 3216253e6 lvol: add lvol set parent 00:08:31.933 567565736 blob: add blob set external parent 00:08:31.952 [Pipeline] writeFile 00:08:31.967 [Pipeline] sh 00:08:32.245 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:08:32.255 [Pipeline] sh 00:08:32.532 + cat autorun-spdk.conf 00:08:32.532 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:32.532 SPDK_TEST_NVMF=1 00:08:32.532 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:32.532 SPDK_TEST_URING=1 00:08:32.532 SPDK_TEST_USDT=1 00:08:32.532 SPDK_RUN_UBSAN=1 00:08:32.532 NET_TYPE=virt 00:08:32.532 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:32.538 RUN_NIGHTLY=0 00:08:32.540 [Pipeline] } 00:08:32.555 [Pipeline] // stage 00:08:32.568 [Pipeline] stage 00:08:32.571 [Pipeline] { (Run VM) 00:08:32.585 [Pipeline] sh 00:08:32.861 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:08:32.861 + echo 'Start stage prepare_nvme.sh' 00:08:32.861 Start stage prepare_nvme.sh 00:08:32.861 + [[ -n 10 ]] 00:08:32.861 + disk_prefix=ex10 00:08:32.861 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:08:32.861 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:08:32.861 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:08:32.861 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:32.861 ++ SPDK_TEST_NVMF=1 00:08:32.861 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:32.861 ++ SPDK_TEST_URING=1 00:08:32.861 ++ SPDK_TEST_USDT=1 00:08:32.861 ++ SPDK_RUN_UBSAN=1 00:08:32.861 ++ NET_TYPE=virt 00:08:32.861 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:32.861 ++ RUN_NIGHTLY=0 00:08:32.861 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:08:32.861 + nvme_files=() 00:08:32.861 + declare -A nvme_files 00:08:32.861 + backend_dir=/var/lib/libvirt/images/backends 00:08:32.861 
+ nvme_files['nvme.img']=5G 00:08:32.861 + nvme_files['nvme-cmb.img']=5G 00:08:32.861 + nvme_files['nvme-multi0.img']=4G 00:08:32.861 + nvme_files['nvme-multi1.img']=4G 00:08:32.861 + nvme_files['nvme-multi2.img']=4G 00:08:32.861 + nvme_files['nvme-openstack.img']=8G 00:08:32.861 + nvme_files['nvme-zns.img']=5G 00:08:32.861 + (( SPDK_TEST_NVME_PMR == 1 )) 00:08:32.861 + (( SPDK_TEST_FTL == 1 )) 00:08:32.861 + (( SPDK_TEST_NVME_FDP == 1 )) 00:08:32.861 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:08:32.861 + for nvme in "${!nvme_files[@]}" 00:08:32.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:08:32.861 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:08:32.861 + for nvme in "${!nvme_files[@]}" 00:08:32.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:08:32.861 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:08:32.861 + for nvme in "${!nvme_files[@]}" 00:08:32.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:08:32.861 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:08:32.861 + for nvme in "${!nvme_files[@]}" 00:08:32.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:08:32.861 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:08:32.861 + for nvme in "${!nvme_files[@]}" 00:08:32.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:08:33.132 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:08:33.132 + for nvme in "${!nvme_files[@]}" 00:08:33.132 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:08:33.132 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:08:33.132 + for nvme in "${!nvme_files[@]}" 00:08:33.132 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:08:33.132 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:08:33.132 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:08:33.139 + echo 'End stage prepare_nvme.sh' 00:08:33.139 End stage prepare_nvme.sh 00:08:33.144 [Pipeline] sh 00:08:33.413 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:08:33.414 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -H -a -v -f fedora38 00:08:33.414 00:08:33.414 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:08:33.414 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:08:33.414 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:08:33.414 HELP=0 00:08:33.414 
DRY_RUN=0 00:08:33.414 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img, 00:08:33.414 NVME_DISKS_TYPE=nvme,nvme, 00:08:33.414 NVME_AUTO_CREATE=0 00:08:33.414 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img, 00:08:33.414 NVME_CMB=,, 00:08:33.414 NVME_PMR=,, 00:08:33.414 NVME_ZNS=,, 00:08:33.414 NVME_MS=,, 00:08:33.414 NVME_FDP=,, 00:08:33.414 SPDK_VAGRANT_DISTRO=fedora38 00:08:33.414 SPDK_VAGRANT_VMCPU=10 00:08:33.414 SPDK_VAGRANT_VMRAM=12288 00:08:33.414 SPDK_VAGRANT_PROVIDER=libvirt 00:08:33.414 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:08:33.414 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:08:33.414 SPDK_OPENSTACK_NETWORK=0 00:08:33.414 VAGRANT_PACKAGE_BOX=0 00:08:33.414 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:08:33.414 FORCE_DISTRO=true 00:08:33.414 VAGRANT_BOX_VERSION= 00:08:33.414 EXTRA_VAGRANTFILES= 00:08:33.414 NIC_MODEL=e1000 00:08:33.414 00:08:33.414 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:08:33.414 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:08:36.771 Bringing machine 'default' up with 'libvirt' provider... 00:08:37.337 ==> default: Creating image (snapshot of base box volume). 00:08:37.594 ==> default: Creating domain with the following settings... 00:08:37.594 ==> default: -- Name: fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1715763709_59d8359609b2e5a3db43 00:08:37.594 ==> default: -- Domain type: kvm 00:08:37.594 ==> default: -- Cpus: 10 00:08:37.594 ==> default: -- Feature: acpi 00:08:37.594 ==> default: -- Feature: apic 00:08:37.594 ==> default: -- Feature: pae 00:08:37.594 ==> default: -- Memory: 12288M 00:08:37.594 ==> default: -- Memory Backing: hugepages: 00:08:37.594 ==> default: -- Management MAC: 00:08:37.594 ==> default: -- Loader: 00:08:37.594 ==> default: -- Nvram: 00:08:37.594 ==> default: -- Base box: spdk/fedora38 00:08:37.594 ==> default: -- Storage pool: default 00:08:37.594 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1715763709_59d8359609b2e5a3db43.img (20G) 00:08:37.594 ==> default: -- Volume Cache: default 00:08:37.594 ==> default: -- Kernel: 00:08:37.594 ==> default: -- Initrd: 00:08:37.594 ==> default: -- Graphics Type: vnc 00:08:37.594 ==> default: -- Graphics Port: -1 00:08:37.594 ==> default: -- Graphics IP: 127.0.0.1 00:08:37.594 ==> default: -- Graphics Password: Not defined 00:08:37.594 ==> default: -- Video Type: cirrus 00:08:37.594 ==> default: -- Video VRAM: 9216 00:08:37.594 ==> default: -- Sound Type: 00:08:37.594 ==> default: -- Keymap: en-us 00:08:37.594 ==> default: -- TPM Path: 00:08:37.595 ==> default: -- INPUT: type=mouse, bus=ps2 00:08:37.595 ==> default: -- Command line args: 00:08:37.595 ==> default: -> value=-device, 00:08:37.595 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:08:37.595 ==> default: -> value=-drive, 00:08:37.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-0-drive0, 00:08:37.595 ==> default: -> value=-device, 00:08:37.595 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:37.595 ==> 
default: -> value=-device, 00:08:37.595 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:08:37.595 ==> default: -> value=-drive, 00:08:37.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:08:37.595 ==> default: -> value=-device, 00:08:37.595 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:37.595 ==> default: -> value=-drive, 00:08:37.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:08:37.595 ==> default: -> value=-device, 00:08:37.595 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:37.595 ==> default: -> value=-drive, 00:08:37.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:08:37.595 ==> default: -> value=-device, 00:08:37.595 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:37.852 ==> default: Creating shared folders metadata... 00:08:37.852 ==> default: Starting domain. 00:08:39.748 ==> default: Waiting for domain to get an IP address... 00:09:18.485 ==> default: Waiting for SSH to become available... 00:09:18.485 ==> default: Configuring and enabling network interfaces... 00:09:19.419 default: SSH address: 192.168.121.62:22 00:09:19.419 default: SSH username: vagrant 00:09:19.419 default: SSH auth method: private key 00:09:21.947 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:09:32.031 ==> default: Mounting SSHFS shared folder... 00:09:32.290 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:09:32.290 ==> default: Checking Mount.. 00:09:33.675 ==> default: Folder Successfully Mounted! 00:09:33.675 ==> default: Running provisioner: file... 00:09:34.609 default: ~/.gitconfig => .gitconfig 00:09:35.176 00:09:35.176 SUCCESS! 00:09:35.176 00:09:35.176 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:09:35.176 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:09:35.176 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
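For reference, the NVMe topology handed to the test VM above (controller nvme-0 with a single namespace backed by ex10-nvme.img, and controller nvme-1 with three namespaces backed by the ex10-nvme-multi*.img files) can be recreated outside Vagrant/libvirt with a plain QEMU invocation along the following lines. This is a minimal sketch, not the command libvirt actually generated: the emulator path, backing files and -device/-drive arguments are taken from the log above, while -enable-kvm, -smp, -m and -nographic are illustrative stand-ins (CPU and RAM mirror SPDK_VAGRANT_VMCPU/SPDK_VAGRANT_VMRAM), and the Fedora OS disk and network devices are omitted.

    # Sketch only: recreates the two emulated NVMe controllers from the domain definition above.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -enable-kvm -smp 10 -m 12288 -nographic \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096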
00:09:35.176 00:09:35.186 [Pipeline] } 00:09:35.200 [Pipeline] // stage 00:09:35.210 [Pipeline] dir 00:09:35.211 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:09:35.213 [Pipeline] { 00:09:35.227 [Pipeline] catchError 00:09:35.229 [Pipeline] { 00:09:35.244 [Pipeline] sh 00:09:35.624 + vagrant ssh-config --host vagrant 00:09:35.624 + + tee ssh_conf 00:09:35.624 sed -ne /^Host/,$p 00:09:39.807 Host vagrant 00:09:39.807 HostName 192.168.121.62 00:09:39.807 User vagrant 00:09:39.807 Port 22 00:09:39.807 UserKnownHostsFile /dev/null 00:09:39.807 StrictHostKeyChecking no 00:09:39.807 PasswordAuthentication no 00:09:39.807 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1701806725-069-updated-1701632595-patched-kernel/libvirt/fedora38 00:09:39.807 IdentitiesOnly yes 00:09:39.807 LogLevel FATAL 00:09:39.807 ForwardAgent yes 00:09:39.807 ForwardX11 yes 00:09:39.807 00:09:39.821 [Pipeline] withEnv 00:09:39.823 [Pipeline] { 00:09:39.839 [Pipeline] sh 00:09:40.118 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:09:40.118 source /etc/os-release 00:09:40.118 [[ -e /image.version ]] && img=$(< /image.version) 00:09:40.118 # Minimal, systemd-like check. 00:09:40.118 if [[ -e /.dockerenv ]]; then 00:09:40.118 # Clear garbage from the node's name: 00:09:40.118 # agt-er_autotest_547-896 -> autotest_547-896 00:09:40.118 # $HOSTNAME is the actual container id 00:09:40.118 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:09:40.118 if mountpoint -q /etc/hostname; then 00:09:40.118 # We can assume this is a mount from a host where container is running, 00:09:40.118 # so fetch its hostname to easily identify the target swarm worker. 00:09:40.118 container="$(< /etc/hostname) ($agent)" 00:09:40.118 else 00:09:40.118 # Fallback 00:09:40.118 container=$agent 00:09:40.118 fi 00:09:40.118 fi 00:09:40.118 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:09:40.118 00:09:40.388 [Pipeline] } 00:09:40.408 [Pipeline] // withEnv 00:09:40.417 [Pipeline] setCustomBuildProperty 00:09:40.432 [Pipeline] stage 00:09:40.434 [Pipeline] { (Tests) 00:09:40.452 [Pipeline] sh 00:09:40.731 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:09:41.001 [Pipeline] timeout 00:09:41.001 Timeout set to expire in 40 min 00:09:41.003 [Pipeline] { 00:09:41.018 [Pipeline] sh 00:09:41.294 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:09:41.861 HEAD is now at 9526734a3 vbdev_lvol: add lvol set parent rpc interface 00:09:41.876 [Pipeline] sh 00:09:42.212 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:09:42.485 [Pipeline] sh 00:09:42.763 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:09:43.037 [Pipeline] sh 00:09:43.318 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:09:43.577 ++ readlink -f spdk_repo 00:09:43.577 + DIR_ROOT=/home/vagrant/spdk_repo 00:09:43.577 + [[ -n /home/vagrant/spdk_repo ]] 00:09:43.577 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:09:43.577 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:09:43.577 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:09:43.577 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:09:43.577 + [[ -d /home/vagrant/spdk_repo/output ]] 00:09:43.577 + cd /home/vagrant/spdk_repo 00:09:43.577 + source /etc/os-release 00:09:43.577 ++ NAME='Fedora Linux' 00:09:43.577 ++ VERSION='38 (Cloud Edition)' 00:09:43.577 ++ ID=fedora 00:09:43.577 ++ VERSION_ID=38 00:09:43.577 ++ VERSION_CODENAME= 00:09:43.577 ++ PLATFORM_ID=platform:f38 00:09:43.577 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:09:43.577 ++ ANSI_COLOR='0;38;2;60;110;180' 00:09:43.577 ++ LOGO=fedora-logo-icon 00:09:43.577 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:09:43.577 ++ HOME_URL=https://fedoraproject.org/ 00:09:43.577 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:09:43.577 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:09:43.577 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:09:43.577 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:09:43.577 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:09:43.577 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:09:43.577 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:09:43.577 ++ SUPPORT_END=2024-05-14 00:09:43.577 ++ VARIANT='Cloud Edition' 00:09:43.577 ++ VARIANT_ID=cloud 00:09:43.577 + uname -a 00:09:43.577 Linux fedora38-cloud-1701806725-069-updated-1701632595 6.5.12-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Dec 3 20:08:38 UTC 2023 x86_64 GNU/Linux 00:09:43.577 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:44.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.143 Hugepages 00:09:44.143 node hugesize free / total 00:09:44.143 node0 1048576kB 0 / 0 00:09:44.143 node0 2048kB 0 / 0 00:09:44.143 00:09:44.143 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:44.143 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:44.143 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:44.143 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:44.143 + rm -f /tmp/spdk-ld-path 00:09:44.143 + source autorun-spdk.conf 00:09:44.143 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:44.143 ++ SPDK_TEST_NVMF=1 00:09:44.143 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:44.143 ++ SPDK_TEST_URING=1 00:09:44.143 ++ SPDK_TEST_USDT=1 00:09:44.143 ++ SPDK_RUN_UBSAN=1 00:09:44.143 ++ NET_TYPE=virt 00:09:44.143 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:44.143 ++ RUN_NIGHTLY=0 00:09:44.143 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:09:44.143 + [[ -n '' ]] 00:09:44.143 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:09:44.143 + for M in /var/spdk/build-*-manifest.txt 00:09:44.143 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:09:44.143 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:44.143 + for M in /var/spdk/build-*-manifest.txt 00:09:44.143 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:09:44.143 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:44.143 + for M in /var/spdk/build-*-manifest.txt 00:09:44.143 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:09:44.143 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:44.143 ++ uname 00:09:44.143 + [[ Linux == \L\i\n\u\x ]] 00:09:44.143 + sudo dmesg -T 00:09:44.401 + sudo dmesg --clear 00:09:44.401 + dmesg_pid=5031 00:09:44.401 + [[ Fedora Linux == FreeBSD ]] 00:09:44.401 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.401 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.401 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:09:44.401 + [[ -x /usr/src/fio-static/fio ]] 00:09:44.401 + sudo dmesg -Tw 00:09:44.401 + export FIO_BIN=/usr/src/fio-static/fio 00:09:44.401 + FIO_BIN=/usr/src/fio-static/fio 00:09:44.401 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:09:44.401 + [[ ! -v VFIO_QEMU_BIN ]] 00:09:44.401 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:09:44.401 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.401 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.401 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:09:44.401 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.401 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.401 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:44.401 Test configuration: 00:09:44.401 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:44.401 SPDK_TEST_NVMF=1 00:09:44.401 SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:44.401 SPDK_TEST_URING=1 00:09:44.401 SPDK_TEST_USDT=1 00:09:44.401 SPDK_RUN_UBSAN=1 00:09:44.401 NET_TYPE=virt 00:09:44.401 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:44.401 RUN_NIGHTLY=0 09:02:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.401 09:02:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:09:44.401 09:02:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.401 09:02:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.401 09:02:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.401 09:02:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.401 09:02:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.401 09:02:56 -- paths/export.sh@5 -- $ export PATH 00:09:44.401 09:02:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.401 09:02:56 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:09:44.401 09:02:56 -- common/autobuild_common.sh@437 -- $ date +%s 00:09:44.401 09:02:56 -- common/autobuild_common.sh@437 -- $ mktemp 
-dt spdk_1715763776.XXXXXX 00:09:44.402 09:02:56 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715763776.v5eqJb 00:09:44.402 09:02:56 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:09:44.402 09:02:56 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:09:44.402 09:02:56 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:09:44.402 09:02:56 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:09:44.402 09:02:56 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:09:44.402 09:02:56 -- common/autobuild_common.sh@453 -- $ get_config_params 00:09:44.402 09:02:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:09:44.402 09:02:56 -- common/autotest_common.sh@10 -- $ set +x 00:09:44.402 09:02:56 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:09:44.402 09:02:56 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:09:44.402 09:02:56 -- pm/common@17 -- $ local monitor 00:09:44.402 09:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:44.402 09:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:44.402 09:02:56 -- pm/common@21 -- $ date +%s 00:09:44.402 09:02:56 -- pm/common@25 -- $ sleep 1 00:09:44.402 09:02:56 -- pm/common@21 -- $ date +%s 00:09:44.402 09:02:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715763776 00:09:44.402 09:02:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715763776 00:09:44.402 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715763776_collect-vmstat.pm.log 00:09:44.402 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715763776_collect-cpu-load.pm.log 00:09:45.340 09:02:57 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:09:45.340 09:02:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:09:45.340 09:02:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:09:45.340 09:02:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:09:45.340 09:02:57 -- spdk/autobuild.sh@16 -- $ date -u 00:09:45.599 Wed May 15 09:02:57 AM UTC 2024 00:09:45.599 09:02:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:09:45.599 v24.05-pre-664-g9526734a3 00:09:45.599 09:02:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:09:45.599 09:02:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:09:45.599 09:02:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:09:45.599 09:02:57 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:09:45.599 09:02:57 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:09:45.599 09:02:57 -- common/autotest_common.sh@10 -- $ set +x 00:09:45.599 ************************************ 00:09:45.599 START TEST ubsan 00:09:45.599 ************************************ 00:09:45.599 using ubsan 00:09:45.599 09:02:57 ubsan -- 
common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:09:45.599 00:09:45.599 real 0m0.000s 00:09:45.599 user 0m0.000s 00:09:45.599 sys 0m0.000s 00:09:45.599 09:02:57 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:09:45.599 09:02:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:09:45.599 ************************************ 00:09:45.599 END TEST ubsan 00:09:45.599 ************************************ 00:09:45.599 09:02:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:09:45.599 09:02:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:09:45.599 09:02:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:09:45.599 09:02:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:09:45.599 09:02:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:09:45.599 09:02:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:09:45.599 09:02:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:09:45.599 09:02:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:09:45.599 09:02:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:09:45.599 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:45.599 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:46.167 Using 'verbs' RDMA provider 00:10:02.420 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:10:17.406 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:10:17.406 Creating mk/config.mk...done. 00:10:17.406 Creating mk/cc.flags.mk...done. 00:10:17.406 Type 'make' to build. 00:10:17.406 09:03:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:10:17.406 09:03:28 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:10:17.406 09:03:28 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:10:17.406 09:03:28 -- common/autotest_common.sh@10 -- $ set +x 00:10:17.406 ************************************ 00:10:17.406 START TEST make 00:10:17.406 ************************************ 00:10:17.406 09:03:28 make -- common/autotest_common.sh@1122 -- $ make -j10 00:10:17.406 make[1]: Nothing to be done for 'all'. 
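For reference, the SPDK build that the log records from here on can be reproduced by hand with the same configuration. This is a minimal sketch, assuming the CI VM's layout (the repository at /home/vagrant/spdk_repo/spdk and a fio source tree at /usr/src/fio, per --with-fio above; adjust both paths for a local checkout). The configure flags and the -j10 parallelism are copied from the log; everything else is an assumption.

    # Sketch only: re-runs the configure + make step shown above on a similar Fedora 38 host.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    # 'make' first drives DPDK's Meson/Ninja build from the bundled dpdk/ submodule,
    # which is the output that follows below.
    make -j10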
00:10:27.373 The Meson build system 00:10:27.373 Version: 1.3.0 00:10:27.373 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:10:27.373 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:10:27.373 Build type: native build 00:10:27.373 Program cat found: YES (/usr/bin/cat) 00:10:27.373 Project name: DPDK 00:10:27.373 Project version: 23.11.0 00:10:27.373 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:10:27.373 C linker for the host machine: cc ld.bfd 2.39-16 00:10:27.373 Host machine cpu family: x86_64 00:10:27.373 Host machine cpu: x86_64 00:10:27.373 Message: ## Building in Developer Mode ## 00:10:27.373 Program pkg-config found: YES (/usr/bin/pkg-config) 00:10:27.373 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:10:27.373 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:10:27.373 Program python3 found: YES (/usr/bin/python3) 00:10:27.373 Program cat found: YES (/usr/bin/cat) 00:10:27.373 Compiler for C supports arguments -march=native: YES 00:10:27.373 Checking for size of "void *" : 8 00:10:27.373 Checking for size of "void *" : 8 (cached) 00:10:27.373 Library m found: YES 00:10:27.373 Library numa found: YES 00:10:27.373 Has header "numaif.h" : YES 00:10:27.373 Library fdt found: NO 00:10:27.373 Library execinfo found: NO 00:10:27.373 Has header "execinfo.h" : YES 00:10:27.373 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:10:27.373 Run-time dependency libarchive found: NO (tried pkgconfig) 00:10:27.373 Run-time dependency libbsd found: NO (tried pkgconfig) 00:10:27.373 Run-time dependency jansson found: NO (tried pkgconfig) 00:10:27.373 Run-time dependency openssl found: YES 3.0.9 00:10:27.373 Run-time dependency libpcap found: YES 1.10.4 00:10:27.373 Has header "pcap.h" with dependency libpcap: YES 00:10:27.373 Compiler for C supports arguments -Wcast-qual: YES 00:10:27.373 Compiler for C supports arguments -Wdeprecated: YES 00:10:27.373 Compiler for C supports arguments -Wformat: YES 00:10:27.373 Compiler for C supports arguments -Wformat-nonliteral: NO 00:10:27.373 Compiler for C supports arguments -Wformat-security: NO 00:10:27.373 Compiler for C supports arguments -Wmissing-declarations: YES 00:10:27.373 Compiler for C supports arguments -Wmissing-prototypes: YES 00:10:27.373 Compiler for C supports arguments -Wnested-externs: YES 00:10:27.373 Compiler for C supports arguments -Wold-style-definition: YES 00:10:27.373 Compiler for C supports arguments -Wpointer-arith: YES 00:10:27.373 Compiler for C supports arguments -Wsign-compare: YES 00:10:27.373 Compiler for C supports arguments -Wstrict-prototypes: YES 00:10:27.373 Compiler for C supports arguments -Wundef: YES 00:10:27.373 Compiler for C supports arguments -Wwrite-strings: YES 00:10:27.373 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:10:27.373 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:10:27.373 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:10:27.373 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:10:27.373 Program objdump found: YES (/usr/bin/objdump) 00:10:27.373 Compiler for C supports arguments -mavx512f: YES 00:10:27.373 Checking if "AVX512 checking" compiles: YES 00:10:27.373 Fetching value of define "__SSE4_2__" : 1 00:10:27.373 Fetching value of define "__AES__" : 1 00:10:27.373 Fetching value of define "__AVX__" : 1 00:10:27.373 
Fetching value of define "__AVX2__" : 1 00:10:27.373 Fetching value of define "__AVX512BW__" : 1 00:10:27.373 Fetching value of define "__AVX512CD__" : 1 00:10:27.373 Fetching value of define "__AVX512DQ__" : 1 00:10:27.373 Fetching value of define "__AVX512F__" : 1 00:10:27.373 Fetching value of define "__AVX512VL__" : 1 00:10:27.373 Fetching value of define "__PCLMUL__" : 1 00:10:27.373 Fetching value of define "__RDRND__" : 1 00:10:27.373 Fetching value of define "__RDSEED__" : 1 00:10:27.373 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:10:27.373 Fetching value of define "__znver1__" : (undefined) 00:10:27.373 Fetching value of define "__znver2__" : (undefined) 00:10:27.373 Fetching value of define "__znver3__" : (undefined) 00:10:27.373 Fetching value of define "__znver4__" : (undefined) 00:10:27.373 Compiler for C supports arguments -Wno-format-truncation: YES 00:10:27.373 Message: lib/log: Defining dependency "log" 00:10:27.373 Message: lib/kvargs: Defining dependency "kvargs" 00:10:27.373 Message: lib/telemetry: Defining dependency "telemetry" 00:10:27.373 Checking for function "getentropy" : NO 00:10:27.373 Message: lib/eal: Defining dependency "eal" 00:10:27.373 Message: lib/ring: Defining dependency "ring" 00:10:27.373 Message: lib/rcu: Defining dependency "rcu" 00:10:27.373 Message: lib/mempool: Defining dependency "mempool" 00:10:27.373 Message: lib/mbuf: Defining dependency "mbuf" 00:10:27.373 Fetching value of define "__PCLMUL__" : 1 (cached) 00:10:27.373 Fetching value of define "__AVX512F__" : 1 (cached) 00:10:27.373 Fetching value of define "__AVX512BW__" : 1 (cached) 00:10:27.373 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:10:27.373 Fetching value of define "__AVX512VL__" : 1 (cached) 00:10:27.373 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:10:27.373 Compiler for C supports arguments -mpclmul: YES 00:10:27.373 Compiler for C supports arguments -maes: YES 00:10:27.373 Compiler for C supports arguments -mavx512f: YES (cached) 00:10:27.373 Compiler for C supports arguments -mavx512bw: YES 00:10:27.373 Compiler for C supports arguments -mavx512dq: YES 00:10:27.373 Compiler for C supports arguments -mavx512vl: YES 00:10:27.373 Compiler for C supports arguments -mvpclmulqdq: YES 00:10:27.373 Compiler for C supports arguments -mavx2: YES 00:10:27.373 Compiler for C supports arguments -mavx: YES 00:10:27.373 Message: lib/net: Defining dependency "net" 00:10:27.373 Message: lib/meter: Defining dependency "meter" 00:10:27.373 Message: lib/ethdev: Defining dependency "ethdev" 00:10:27.373 Message: lib/pci: Defining dependency "pci" 00:10:27.373 Message: lib/cmdline: Defining dependency "cmdline" 00:10:27.373 Message: lib/hash: Defining dependency "hash" 00:10:27.373 Message: lib/timer: Defining dependency "timer" 00:10:27.373 Message: lib/compressdev: Defining dependency "compressdev" 00:10:27.373 Message: lib/cryptodev: Defining dependency "cryptodev" 00:10:27.373 Message: lib/dmadev: Defining dependency "dmadev" 00:10:27.373 Compiler for C supports arguments -Wno-cast-qual: YES 00:10:27.373 Message: lib/power: Defining dependency "power" 00:10:27.373 Message: lib/reorder: Defining dependency "reorder" 00:10:27.373 Message: lib/security: Defining dependency "security" 00:10:27.373 Has header "linux/userfaultfd.h" : YES 00:10:27.373 Has header "linux/vduse.h" : YES 00:10:27.373 Message: lib/vhost: Defining dependency "vhost" 00:10:27.373 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:10:27.373 Message: 
drivers/bus/pci: Defining dependency "bus_pci" 00:10:27.373 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:10:27.373 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:10:27.373 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:10:27.373 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:10:27.373 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:10:27.373 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:10:27.373 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:10:27.373 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:10:27.373 Program doxygen found: YES (/usr/bin/doxygen) 00:10:27.373 Configuring doxy-api-html.conf using configuration 00:10:27.373 Configuring doxy-api-man.conf using configuration 00:10:27.373 Program mandb found: YES (/usr/bin/mandb) 00:10:27.373 Program sphinx-build found: NO 00:10:27.373 Configuring rte_build_config.h using configuration 00:10:27.373 Message: 00:10:27.373 ================= 00:10:27.373 Applications Enabled 00:10:27.373 ================= 00:10:27.373 00:10:27.373 apps: 00:10:27.373 00:10:27.373 00:10:27.373 Message: 00:10:27.373 ================= 00:10:27.373 Libraries Enabled 00:10:27.373 ================= 00:10:27.373 00:10:27.373 libs: 00:10:27.373 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:10:27.373 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:10:27.373 cryptodev, dmadev, power, reorder, security, vhost, 00:10:27.373 00:10:27.373 Message: 00:10:27.373 =============== 00:10:27.373 Drivers Enabled 00:10:27.374 =============== 00:10:27.374 00:10:27.374 common: 00:10:27.374 00:10:27.374 bus: 00:10:27.374 pci, vdev, 00:10:27.374 mempool: 00:10:27.374 ring, 00:10:27.374 dma: 00:10:27.374 00:10:27.374 net: 00:10:27.374 00:10:27.374 crypto: 00:10:27.374 00:10:27.374 compress: 00:10:27.374 00:10:27.374 vdpa: 00:10:27.374 00:10:27.374 00:10:27.374 Message: 00:10:27.374 ================= 00:10:27.374 Content Skipped 00:10:27.374 ================= 00:10:27.374 00:10:27.374 apps: 00:10:27.374 dumpcap: explicitly disabled via build config 00:10:27.374 graph: explicitly disabled via build config 00:10:27.374 pdump: explicitly disabled via build config 00:10:27.374 proc-info: explicitly disabled via build config 00:10:27.374 test-acl: explicitly disabled via build config 00:10:27.374 test-bbdev: explicitly disabled via build config 00:10:27.374 test-cmdline: explicitly disabled via build config 00:10:27.374 test-compress-perf: explicitly disabled via build config 00:10:27.374 test-crypto-perf: explicitly disabled via build config 00:10:27.374 test-dma-perf: explicitly disabled via build config 00:10:27.374 test-eventdev: explicitly disabled via build config 00:10:27.374 test-fib: explicitly disabled via build config 00:10:27.374 test-flow-perf: explicitly disabled via build config 00:10:27.374 test-gpudev: explicitly disabled via build config 00:10:27.374 test-mldev: explicitly disabled via build config 00:10:27.374 test-pipeline: explicitly disabled via build config 00:10:27.374 test-pmd: explicitly disabled via build config 00:10:27.374 test-regex: explicitly disabled via build config 00:10:27.374 test-sad: explicitly disabled via build config 00:10:27.374 test-security-perf: explicitly disabled via build config 00:10:27.374 00:10:27.374 libs: 00:10:27.374 metrics: explicitly disabled via build config 00:10:27.374 acl: explicitly disabled via 
build config 00:10:27.374 bbdev: explicitly disabled via build config 00:10:27.374 bitratestats: explicitly disabled via build config 00:10:27.374 bpf: explicitly disabled via build config 00:10:27.374 cfgfile: explicitly disabled via build config 00:10:27.374 distributor: explicitly disabled via build config 00:10:27.374 efd: explicitly disabled via build config 00:10:27.374 eventdev: explicitly disabled via build config 00:10:27.374 dispatcher: explicitly disabled via build config 00:10:27.374 gpudev: explicitly disabled via build config 00:10:27.374 gro: explicitly disabled via build config 00:10:27.374 gso: explicitly disabled via build config 00:10:27.374 ip_frag: explicitly disabled via build config 00:10:27.374 jobstats: explicitly disabled via build config 00:10:27.374 latencystats: explicitly disabled via build config 00:10:27.374 lpm: explicitly disabled via build config 00:10:27.374 member: explicitly disabled via build config 00:10:27.374 pcapng: explicitly disabled via build config 00:10:27.374 rawdev: explicitly disabled via build config 00:10:27.374 regexdev: explicitly disabled via build config 00:10:27.374 mldev: explicitly disabled via build config 00:10:27.374 rib: explicitly disabled via build config 00:10:27.374 sched: explicitly disabled via build config 00:10:27.374 stack: explicitly disabled via build config 00:10:27.374 ipsec: explicitly disabled via build config 00:10:27.374 pdcp: explicitly disabled via build config 00:10:27.374 fib: explicitly disabled via build config 00:10:27.374 port: explicitly disabled via build config 00:10:27.374 pdump: explicitly disabled via build config 00:10:27.374 table: explicitly disabled via build config 00:10:27.374 pipeline: explicitly disabled via build config 00:10:27.374 graph: explicitly disabled via build config 00:10:27.374 node: explicitly disabled via build config 00:10:27.374 00:10:27.374 drivers: 00:10:27.374 common/cpt: not in enabled drivers build config 00:10:27.374 common/dpaax: not in enabled drivers build config 00:10:27.374 common/iavf: not in enabled drivers build config 00:10:27.374 common/idpf: not in enabled drivers build config 00:10:27.374 common/mvep: not in enabled drivers build config 00:10:27.374 common/octeontx: not in enabled drivers build config 00:10:27.374 bus/auxiliary: not in enabled drivers build config 00:10:27.374 bus/cdx: not in enabled drivers build config 00:10:27.374 bus/dpaa: not in enabled drivers build config 00:10:27.374 bus/fslmc: not in enabled drivers build config 00:10:27.374 bus/ifpga: not in enabled drivers build config 00:10:27.374 bus/platform: not in enabled drivers build config 00:10:27.374 bus/vmbus: not in enabled drivers build config 00:10:27.374 common/cnxk: not in enabled drivers build config 00:10:27.374 common/mlx5: not in enabled drivers build config 00:10:27.374 common/nfp: not in enabled drivers build config 00:10:27.374 common/qat: not in enabled drivers build config 00:10:27.374 common/sfc_efx: not in enabled drivers build config 00:10:27.374 mempool/bucket: not in enabled drivers build config 00:10:27.374 mempool/cnxk: not in enabled drivers build config 00:10:27.374 mempool/dpaa: not in enabled drivers build config 00:10:27.374 mempool/dpaa2: not in enabled drivers build config 00:10:27.374 mempool/octeontx: not in enabled drivers build config 00:10:27.374 mempool/stack: not in enabled drivers build config 00:10:27.374 dma/cnxk: not in enabled drivers build config 00:10:27.374 dma/dpaa: not in enabled drivers build config 00:10:27.374 dma/dpaa2: not in enabled 
drivers build config 00:10:27.374 dma/hisilicon: not in enabled drivers build config 00:10:27.374 dma/idxd: not in enabled drivers build config 00:10:27.374 dma/ioat: not in enabled drivers build config 00:10:27.374 dma/skeleton: not in enabled drivers build config 00:10:27.374 net/af_packet: not in enabled drivers build config 00:10:27.374 net/af_xdp: not in enabled drivers build config 00:10:27.374 net/ark: not in enabled drivers build config 00:10:27.374 net/atlantic: not in enabled drivers build config 00:10:27.374 net/avp: not in enabled drivers build config 00:10:27.374 net/axgbe: not in enabled drivers build config 00:10:27.374 net/bnx2x: not in enabled drivers build config 00:10:27.374 net/bnxt: not in enabled drivers build config 00:10:27.374 net/bonding: not in enabled drivers build config 00:10:27.374 net/cnxk: not in enabled drivers build config 00:10:27.374 net/cpfl: not in enabled drivers build config 00:10:27.374 net/cxgbe: not in enabled drivers build config 00:10:27.374 net/dpaa: not in enabled drivers build config 00:10:27.374 net/dpaa2: not in enabled drivers build config 00:10:27.374 net/e1000: not in enabled drivers build config 00:10:27.374 net/ena: not in enabled drivers build config 00:10:27.374 net/enetc: not in enabled drivers build config 00:10:27.374 net/enetfec: not in enabled drivers build config 00:10:27.374 net/enic: not in enabled drivers build config 00:10:27.374 net/failsafe: not in enabled drivers build config 00:10:27.374 net/fm10k: not in enabled drivers build config 00:10:27.374 net/gve: not in enabled drivers build config 00:10:27.374 net/hinic: not in enabled drivers build config 00:10:27.374 net/hns3: not in enabled drivers build config 00:10:27.374 net/i40e: not in enabled drivers build config 00:10:27.374 net/iavf: not in enabled drivers build config 00:10:27.374 net/ice: not in enabled drivers build config 00:10:27.374 net/idpf: not in enabled drivers build config 00:10:27.374 net/igc: not in enabled drivers build config 00:10:27.374 net/ionic: not in enabled drivers build config 00:10:27.374 net/ipn3ke: not in enabled drivers build config 00:10:27.374 net/ixgbe: not in enabled drivers build config 00:10:27.374 net/mana: not in enabled drivers build config 00:10:27.374 net/memif: not in enabled drivers build config 00:10:27.374 net/mlx4: not in enabled drivers build config 00:10:27.374 net/mlx5: not in enabled drivers build config 00:10:27.374 net/mvneta: not in enabled drivers build config 00:10:27.374 net/mvpp2: not in enabled drivers build config 00:10:27.374 net/netvsc: not in enabled drivers build config 00:10:27.374 net/nfb: not in enabled drivers build config 00:10:27.374 net/nfp: not in enabled drivers build config 00:10:27.374 net/ngbe: not in enabled drivers build config 00:10:27.374 net/null: not in enabled drivers build config 00:10:27.374 net/octeontx: not in enabled drivers build config 00:10:27.374 net/octeon_ep: not in enabled drivers build config 00:10:27.374 net/pcap: not in enabled drivers build config 00:10:27.374 net/pfe: not in enabled drivers build config 00:10:27.374 net/qede: not in enabled drivers build config 00:10:27.374 net/ring: not in enabled drivers build config 00:10:27.374 net/sfc: not in enabled drivers build config 00:10:27.374 net/softnic: not in enabled drivers build config 00:10:27.374 net/tap: not in enabled drivers build config 00:10:27.374 net/thunderx: not in enabled drivers build config 00:10:27.374 net/txgbe: not in enabled drivers build config 00:10:27.374 net/vdev_netvsc: not in enabled drivers 
build config 00:10:27.374 net/vhost: not in enabled drivers build config 00:10:27.374 net/virtio: not in enabled drivers build config 00:10:27.374 net/vmxnet3: not in enabled drivers build config 00:10:27.374 raw/*: missing internal dependency, "rawdev" 00:10:27.374 crypto/armv8: not in enabled drivers build config 00:10:27.374 crypto/bcmfs: not in enabled drivers build config 00:10:27.374 crypto/caam_jr: not in enabled drivers build config 00:10:27.374 crypto/ccp: not in enabled drivers build config 00:10:27.374 crypto/cnxk: not in enabled drivers build config 00:10:27.374 crypto/dpaa_sec: not in enabled drivers build config 00:10:27.374 crypto/dpaa2_sec: not in enabled drivers build config 00:10:27.374 crypto/ipsec_mb: not in enabled drivers build config 00:10:27.374 crypto/mlx5: not in enabled drivers build config 00:10:27.374 crypto/mvsam: not in enabled drivers build config 00:10:27.374 crypto/nitrox: not in enabled drivers build config 00:10:27.374 crypto/null: not in enabled drivers build config 00:10:27.374 crypto/octeontx: not in enabled drivers build config 00:10:27.374 crypto/openssl: not in enabled drivers build config 00:10:27.374 crypto/scheduler: not in enabled drivers build config 00:10:27.374 crypto/uadk: not in enabled drivers build config 00:10:27.374 crypto/virtio: not in enabled drivers build config 00:10:27.374 compress/isal: not in enabled drivers build config 00:10:27.374 compress/mlx5: not in enabled drivers build config 00:10:27.375 compress/octeontx: not in enabled drivers build config 00:10:27.375 compress/zlib: not in enabled drivers build config 00:10:27.375 regex/*: missing internal dependency, "regexdev" 00:10:27.375 ml/*: missing internal dependency, "mldev" 00:10:27.375 vdpa/ifc: not in enabled drivers build config 00:10:27.375 vdpa/mlx5: not in enabled drivers build config 00:10:27.375 vdpa/nfp: not in enabled drivers build config 00:10:27.375 vdpa/sfc: not in enabled drivers build config 00:10:27.375 event/*: missing internal dependency, "eventdev" 00:10:27.375 baseband/*: missing internal dependency, "bbdev" 00:10:27.375 gpu/*: missing internal dependency, "gpudev" 00:10:27.375 00:10:27.375 00:10:27.375 Build targets in project: 85 00:10:27.375 00:10:27.375 DPDK 23.11.0 00:10:27.375 00:10:27.375 User defined options 00:10:27.375 buildtype : debug 00:10:27.375 default_library : shared 00:10:27.375 libdir : lib 00:10:27.375 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:27.375 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:10:27.375 c_link_args : 00:10:27.375 cpu_instruction_set: native 00:10:27.375 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:10:27.375 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:10:27.375 enable_docs : false 00:10:27.375 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:10:27.375 enable_kmods : false 00:10:27.375 tests : false 00:10:27.375 00:10:27.375 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:10:27.375 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:10:27.375 [1/265] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:10:27.375 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:10:27.375 [3/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:10:27.375 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:10:27.375 [5/265] Linking static target lib/librte_kvargs.a 00:10:27.375 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:10:27.375 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:10:27.375 [8/265] Linking static target lib/librte_log.a 00:10:27.375 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:10:27.632 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:10:27.889 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:10:27.889 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:10:27.889 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:10:27.889 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:10:28.147 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:10:28.147 [16/265] Linking static target lib/librte_telemetry.a 00:10:28.147 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:10:28.147 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:10:28.147 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:10:28.404 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:10:28.404 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:10:28.661 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:10:28.661 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:10:28.661 [24/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:10:28.661 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:10:28.661 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:10:28.920 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:10:28.920 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:10:28.920 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:10:28.920 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:10:29.177 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:10:29.177 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:10:29.177 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:10:29.177 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:10:29.177 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:10:29.177 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:10:29.177 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:10:29.436 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:10:29.436 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:10:29.694 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 
00:10:29.694 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:10:29.694 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:10:29.694 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:10:29.694 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:10:29.952 [45/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:10:29.952 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:10:29.952 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:10:30.210 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:10:30.210 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:10:30.210 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:10:30.210 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:10:30.468 [52/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:10:30.468 [53/265] Linking target lib/librte_log.so.24.0 00:10:30.468 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:10:30.468 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:10:30.468 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:10:30.468 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:10:30.468 [58/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:10:30.726 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:10:30.726 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:10:30.726 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:10:30.726 [62/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:10:30.726 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:10:30.726 [64/265] Linking target lib/librte_telemetry.so.24.0 00:10:30.726 [65/265] Linking target lib/librte_kvargs.so.24.0 00:10:30.984 [66/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:10:30.984 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:10:31.242 [68/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:10:31.242 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:10:31.242 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:10:31.242 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:10:31.242 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:10:31.242 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:10:31.242 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:10:31.242 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:10:31.242 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:10:31.550 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:10:31.550 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:10:31.550 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:10:31.550 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:10:31.550 [81/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:10:31.550 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:10:32.115 [83/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:10:32.115 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:10:32.115 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:10:32.115 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:10:32.115 [87/265] Linking static target lib/librte_eal.a 00:10:32.115 [88/265] Linking static target lib/librte_ring.a 00:10:32.115 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:10:32.115 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:10:32.115 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:10:32.115 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:10:32.374 [93/265] Linking static target lib/librte_rcu.a 00:10:32.374 [94/265] Linking static target lib/librte_mempool.a 00:10:32.374 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:10:32.374 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:10:32.374 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:10:32.632 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:10:32.632 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:10:32.632 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:10:32.632 [101/265] Linking static target lib/librte_mbuf.a 00:10:32.890 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:10:32.890 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:10:32.890 [104/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:10:33.148 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:10:33.148 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:10:33.148 [107/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:10:33.148 [108/265] Linking static target lib/librte_net.a 00:10:33.148 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:10:33.148 [110/265] Linking static target lib/librte_meter.a 00:10:33.407 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:10:33.407 [112/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:10:33.666 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:10:33.666 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:10:33.925 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:10:33.925 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:10:33.925 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:10:34.183 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:10:34.183 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:10:34.447 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:10:34.704 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:10:34.704 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:10:34.704 [123/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:10:34.962 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:10:34.962 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:10:34.962 [126/265] Linking static target lib/librte_pci.a 00:10:34.962 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:10:35.220 [128/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:10:35.220 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:10:35.220 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:10:35.220 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:10:35.220 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:10:35.220 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:10:35.220 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:10:35.220 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:10:35.220 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:10:35.220 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:10:35.220 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:10:35.479 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:10:35.479 [140/265] Linking static target lib/librte_ethdev.a 00:10:35.479 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:10:35.479 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:10:35.479 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:10:35.479 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:10:35.479 [145/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:10:35.479 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:10:35.479 [147/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:35.737 [148/265] Linking static target lib/librte_cmdline.a 00:10:35.737 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:10:35.996 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:10:35.996 [151/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:10:35.996 [152/265] Linking static target lib/librte_timer.a 00:10:35.996 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:10:36.255 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:10:36.514 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:10:36.514 [156/265] Linking static target lib/librte_compressdev.a 00:10:36.514 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:10:36.514 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:10:36.514 [159/265] Linking static target lib/librte_hash.a 00:10:36.773 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:10:36.773 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:10:37.032 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:10:37.032 [163/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:10:37.032 [164/265] Linking static target lib/librte_dmadev.a 00:10:37.032 [165/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:10:37.290 [166/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:10:37.290 [167/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:10:37.290 [168/265] Linking static target lib/librte_cryptodev.a 00:10:37.290 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:10:37.290 [170/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:10:37.290 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:10:37.549 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:10:37.808 [173/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:10:37.808 [174/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:10:37.808 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:38.086 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:10:38.086 [177/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:38.086 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:10:38.086 [179/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:10:38.086 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:10:38.351 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:10:38.351 [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:10:38.351 [183/265] Linking static target lib/librte_power.a 00:10:38.351 [184/265] Linking static target lib/librte_reorder.a 00:10:38.351 [185/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:10:38.610 [186/265] Linking static target lib/librte_security.a 00:10:38.610 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:10:38.610 [188/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:10:38.610 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:10:38.610 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:10:39.176 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:10:39.176 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:10:39.176 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:10:39.433 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:10:39.690 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:10:39.690 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:10:39.690 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:10:39.690 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:10:39.958 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:10:39.958 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:10:39.958 [201/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:10:39.958 [202/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:10:40.215 [203/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:10:40.215 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:10:40.215 [205/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:10:40.215 [206/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:10:40.215 [207/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:10:40.473 [208/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:40.473 [209/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:10:40.473 [210/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:40.473 [211/265] Linking static target drivers/librte_bus_pci.a 00:10:40.473 [212/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:10:40.473 [213/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:10:40.473 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:10:40.473 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:40.473 [216/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:10:40.473 [217/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:40.473 [218/265] Linking static target drivers/librte_bus_vdev.a 00:10:40.730 [219/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:40.730 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:10:40.730 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:40.730 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:40.730 [223/265] Linking static target drivers/librte_mempool_ring.a 00:10:40.730 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:40.988 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:41.247 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:10:41.247 [227/265] Linking static target lib/librte_vhost.a 00:10:42.623 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:10:45.155 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:46.090 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:10:46.090 [231/265] Linking target lib/librte_eal.so.24.0 00:10:46.414 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:10:46.414 [233/265] Linking target lib/librte_dmadev.so.24.0 00:10:46.414 [234/265] Linking target lib/librte_meter.so.24.0 00:10:46.414 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:10:46.414 [236/265] Linking target lib/librte_ring.so.24.0 00:10:46.414 [237/265] Linking target lib/librte_pci.so.24.0 00:10:46.414 [238/265] Linking target lib/librte_timer.so.24.0 00:10:46.692 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:10:46.692 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:10:46.692 
[241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:10:46.692 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:10:46.692 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:10:46.692 [244/265] Linking target lib/librte_mempool.so.24.0 00:10:46.692 [245/265] Linking target lib/librte_rcu.so.24.0 00:10:46.692 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:10:46.692 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:10:46.692 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:10:46.951 [249/265] Linking target lib/librte_mbuf.so.24.0 00:10:46.951 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:10:46.951 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:10:46.951 [252/265] Linking target lib/librte_net.so.24.0 00:10:46.951 [253/265] Linking target lib/librte_reorder.so.24.0 00:10:46.951 [254/265] Linking target lib/librte_compressdev.so.24.0 00:10:47.209 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:10:47.209 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:10:47.209 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:10:47.209 [258/265] Linking target lib/librte_cmdline.so.24.0 00:10:47.209 [259/265] Linking target lib/librte_security.so.24.0 00:10:47.209 [260/265] Linking target lib/librte_hash.so.24.0 00:10:47.209 [261/265] Linking target lib/librte_ethdev.so.24.0 00:10:47.467 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:10:47.467 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:10:47.467 [264/265] Linking target lib/librte_power.so.24.0 00:10:47.726 [265/265] Linking target lib/librte_vhost.so.24.0 00:10:47.726 INFO: autodetecting backend as ninja 00:10:47.726 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:10:49.101 CC lib/ut/ut.o 00:10:49.101 CC lib/ut_mock/mock.o 00:10:49.101 CC lib/log/log_deprecated.o 00:10:49.101 CC lib/log/log.o 00:10:49.102 CC lib/log/log_flags.o 00:10:49.102 LIB libspdk_ut.a 00:10:49.102 LIB libspdk_ut_mock.a 00:10:49.102 SO libspdk_ut.so.2.0 00:10:49.102 SO libspdk_ut_mock.so.6.0 00:10:49.102 LIB libspdk_log.a 00:10:49.102 SYMLINK libspdk_ut_mock.so 00:10:49.102 SYMLINK libspdk_ut.so 00:10:49.360 SO libspdk_log.so.7.0 00:10:49.360 SYMLINK libspdk_log.so 00:10:49.618 CC lib/util/base64.o 00:10:49.618 CC lib/util/bit_array.o 00:10:49.618 CC lib/dma/dma.o 00:10:49.618 CC lib/util/cpuset.o 00:10:49.618 CC lib/util/crc16.o 00:10:49.618 CC lib/util/crc32.o 00:10:49.618 CC lib/util/crc32c.o 00:10:49.618 CC lib/ioat/ioat.o 00:10:49.618 CXX lib/trace_parser/trace.o 00:10:49.618 CC lib/util/crc32_ieee.o 00:10:49.618 CC lib/vfio_user/host/vfio_user_pci.o 00:10:49.618 CC lib/util/crc64.o 00:10:49.618 CC lib/util/dif.o 00:10:49.877 CC lib/util/fd.o 00:10:49.877 CC lib/util/file.o 00:10:49.877 LIB libspdk_dma.a 00:10:49.877 CC lib/vfio_user/host/vfio_user.o 00:10:49.877 SO libspdk_dma.so.4.0 00:10:49.877 CC lib/util/hexlify.o 00:10:49.877 SYMLINK libspdk_dma.so 00:10:49.877 CC lib/util/iov.o 00:10:49.877 CC lib/util/math.o 00:10:49.877 LIB libspdk_ioat.a 00:10:49.877 CC lib/util/pipe.o 00:10:49.877 CC lib/util/strerror_tls.o 00:10:49.877 SO 
libspdk_ioat.so.7.0 00:10:50.134 SYMLINK libspdk_ioat.so 00:10:50.134 CC lib/util/string.o 00:10:50.134 CC lib/util/uuid.o 00:10:50.134 CC lib/util/fd_group.o 00:10:50.134 CC lib/util/xor.o 00:10:50.134 CC lib/util/zipf.o 00:10:50.134 LIB libspdk_vfio_user.a 00:10:50.134 SO libspdk_vfio_user.so.5.0 00:10:50.134 SYMLINK libspdk_vfio_user.so 00:10:50.392 LIB libspdk_util.a 00:10:50.392 SO libspdk_util.so.9.0 00:10:50.650 SYMLINK libspdk_util.so 00:10:50.650 LIB libspdk_trace_parser.a 00:10:50.650 SO libspdk_trace_parser.so.5.0 00:10:50.650 CC lib/idxd/idxd.o 00:10:50.650 CC lib/idxd/idxd_user.o 00:10:50.650 CC lib/vmd/vmd.o 00:10:50.650 CC lib/vmd/led.o 00:10:50.650 SYMLINK libspdk_trace_parser.so 00:10:50.650 CC lib/conf/conf.o 00:10:50.650 CC lib/json/json_parse.o 00:10:50.650 CC lib/rdma/common.o 00:10:50.650 CC lib/json/json_util.o 00:10:50.650 CC lib/json/json_write.o 00:10:50.650 CC lib/env_dpdk/env.o 00:10:50.908 CC lib/env_dpdk/memory.o 00:10:50.908 CC lib/env_dpdk/pci.o 00:10:50.908 CC lib/env_dpdk/init.o 00:10:50.908 LIB libspdk_conf.a 00:10:50.908 CC lib/env_dpdk/threads.o 00:10:51.167 SO libspdk_conf.so.6.0 00:10:51.167 LIB libspdk_json.a 00:10:51.167 CC lib/rdma/rdma_verbs.o 00:10:51.167 SO libspdk_json.so.6.0 00:10:51.167 SYMLINK libspdk_conf.so 00:10:51.167 CC lib/env_dpdk/pci_ioat.o 00:10:51.167 SYMLINK libspdk_json.so 00:10:51.167 CC lib/env_dpdk/pci_virtio.o 00:10:51.167 LIB libspdk_idxd.a 00:10:51.167 LIB libspdk_rdma.a 00:10:51.167 SO libspdk_idxd.so.12.0 00:10:51.425 CC lib/env_dpdk/pci_vmd.o 00:10:51.425 SO libspdk_rdma.so.6.0 00:10:51.425 CC lib/env_dpdk/pci_idxd.o 00:10:51.425 SYMLINK libspdk_idxd.so 00:10:51.425 CC lib/env_dpdk/pci_event.o 00:10:51.425 CC lib/env_dpdk/sigbus_handler.o 00:10:51.425 CC lib/env_dpdk/pci_dpdk.o 00:10:51.425 LIB libspdk_vmd.a 00:10:51.425 SYMLINK libspdk_rdma.so 00:10:51.425 CC lib/env_dpdk/pci_dpdk_2207.o 00:10:51.425 CC lib/jsonrpc/jsonrpc_server.o 00:10:51.425 SO libspdk_vmd.so.6.0 00:10:51.425 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:10:51.425 CC lib/jsonrpc/jsonrpc_client.o 00:10:51.425 SYMLINK libspdk_vmd.so 00:10:51.425 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:10:51.425 CC lib/env_dpdk/pci_dpdk_2211.o 00:10:51.684 LIB libspdk_jsonrpc.a 00:10:51.684 SO libspdk_jsonrpc.so.6.0 00:10:51.942 SYMLINK libspdk_jsonrpc.so 00:10:52.201 LIB libspdk_env_dpdk.a 00:10:52.201 CC lib/rpc/rpc.o 00:10:52.201 SO libspdk_env_dpdk.so.14.0 00:10:52.460 SYMLINK libspdk_env_dpdk.so 00:10:52.460 LIB libspdk_rpc.a 00:10:52.460 SO libspdk_rpc.so.6.0 00:10:52.460 SYMLINK libspdk_rpc.so 00:10:52.720 CC lib/trace/trace.o 00:10:52.720 CC lib/trace/trace_flags.o 00:10:52.720 CC lib/trace/trace_rpc.o 00:10:52.720 CC lib/keyring/keyring_rpc.o 00:10:52.720 CC lib/keyring/keyring.o 00:10:52.720 CC lib/notify/notify.o 00:10:52.977 CC lib/notify/notify_rpc.o 00:10:52.977 LIB libspdk_notify.a 00:10:52.977 SO libspdk_notify.so.6.0 00:10:53.234 LIB libspdk_trace.a 00:10:53.234 LIB libspdk_keyring.a 00:10:53.234 SYMLINK libspdk_notify.so 00:10:53.234 SO libspdk_trace.so.10.0 00:10:53.234 SO libspdk_keyring.so.1.0 00:10:53.234 SYMLINK libspdk_keyring.so 00:10:53.234 SYMLINK libspdk_trace.so 00:10:53.493 CC lib/sock/sock.o 00:10:53.493 CC lib/sock/sock_rpc.o 00:10:53.493 CC lib/thread/thread.o 00:10:53.493 CC lib/thread/iobuf.o 00:10:54.060 LIB libspdk_sock.a 00:10:54.060 SO libspdk_sock.so.9.0 00:10:54.060 SYMLINK libspdk_sock.so 00:10:54.316 CC lib/nvme/nvme_ctrlr.o 00:10:54.316 CC lib/nvme/nvme_ctrlr_cmd.o 00:10:54.316 CC lib/nvme/nvme_fabric.o 00:10:54.316 CC 
lib/nvme/nvme_ns_cmd.o 00:10:54.316 CC lib/nvme/nvme_ns.o 00:10:54.316 CC lib/nvme/nvme_pcie_common.o 00:10:54.316 CC lib/nvme/nvme_pcie.o 00:10:54.316 CC lib/nvme/nvme_qpair.o 00:10:54.316 CC lib/nvme/nvme.o 00:10:55.251 LIB libspdk_thread.a 00:10:55.251 CC lib/nvme/nvme_quirks.o 00:10:55.251 SO libspdk_thread.so.10.0 00:10:55.251 CC lib/nvme/nvme_transport.o 00:10:55.251 SYMLINK libspdk_thread.so 00:10:55.251 CC lib/nvme/nvme_discovery.o 00:10:55.251 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:55.251 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:55.251 CC lib/nvme/nvme_tcp.o 00:10:55.509 CC lib/nvme/nvme_opal.o 00:10:55.509 CC lib/nvme/nvme_io_msg.o 00:10:55.509 CC lib/accel/accel.o 00:10:55.768 CC lib/accel/accel_rpc.o 00:10:55.768 CC lib/accel/accel_sw.o 00:10:56.038 CC lib/nvme/nvme_poll_group.o 00:10:56.038 CC lib/nvme/nvme_zns.o 00:10:56.038 CC lib/blob/blobstore.o 00:10:56.038 CC lib/init/json_config.o 00:10:56.038 CC lib/init/subsystem.o 00:10:56.038 CC lib/virtio/virtio.o 00:10:56.038 CC lib/virtio/virtio_vhost_user.o 00:10:56.296 CC lib/init/subsystem_rpc.o 00:10:56.296 CC lib/init/rpc.o 00:10:56.296 CC lib/virtio/virtio_vfio_user.o 00:10:56.554 CC lib/virtio/virtio_pci.o 00:10:56.554 LIB libspdk_init.a 00:10:56.554 CC lib/nvme/nvme_stubs.o 00:10:56.554 CC lib/blob/request.o 00:10:56.554 SO libspdk_init.so.5.0 00:10:56.554 LIB libspdk_accel.a 00:10:56.554 CC lib/blob/zeroes.o 00:10:56.554 CC lib/blob/blob_bs_dev.o 00:10:56.554 SYMLINK libspdk_init.so 00:10:56.554 SO libspdk_accel.so.15.0 00:10:56.554 CC lib/nvme/nvme_auth.o 00:10:56.554 SYMLINK libspdk_accel.so 00:10:56.812 CC lib/nvme/nvme_cuse.o 00:10:56.812 LIB libspdk_virtio.a 00:10:56.812 CC lib/nvme/nvme_rdma.o 00:10:56.812 SO libspdk_virtio.so.7.0 00:10:56.812 CC lib/event/app.o 00:10:56.812 CC lib/event/reactor.o 00:10:56.812 CC lib/event/log_rpc.o 00:10:56.812 CC lib/bdev/bdev.o 00:10:56.812 SYMLINK libspdk_virtio.so 00:10:56.812 CC lib/bdev/bdev_rpc.o 00:10:57.070 CC lib/bdev/bdev_zone.o 00:10:57.070 CC lib/event/app_rpc.o 00:10:57.070 CC lib/bdev/part.o 00:10:57.330 CC lib/event/scheduler_static.o 00:10:57.330 CC lib/bdev/scsi_nvme.o 00:10:57.330 LIB libspdk_event.a 00:10:57.588 SO libspdk_event.so.13.0 00:10:57.588 SYMLINK libspdk_event.so 00:10:58.155 LIB libspdk_nvme.a 00:10:58.155 SO libspdk_nvme.so.13.0 00:10:58.721 SYMLINK libspdk_nvme.so 00:10:58.721 LIB libspdk_blob.a 00:10:58.986 SO libspdk_blob.so.11.0 00:10:58.986 SYMLINK libspdk_blob.so 00:10:59.244 CC lib/lvol/lvol.o 00:10:59.244 CC lib/blobfs/blobfs.o 00:10:59.244 CC lib/blobfs/tree.o 00:10:59.502 LIB libspdk_bdev.a 00:10:59.502 SO libspdk_bdev.so.15.0 00:10:59.759 SYMLINK libspdk_bdev.so 00:10:59.759 CC lib/nvmf/ctrlr.o 00:11:00.020 CC lib/nvmf/ctrlr_bdev.o 00:11:00.020 CC lib/nvmf/ctrlr_discovery.o 00:11:00.020 CC lib/nvmf/subsystem.o 00:11:00.020 CC lib/scsi/dev.o 00:11:00.020 CC lib/ublk/ublk.o 00:11:00.020 CC lib/ftl/ftl_core.o 00:11:00.020 CC lib/nbd/nbd.o 00:11:00.278 LIB libspdk_lvol.a 00:11:00.278 CC lib/scsi/lun.o 00:11:00.278 SO libspdk_lvol.so.10.0 00:11:00.278 LIB libspdk_blobfs.a 00:11:00.278 SO libspdk_blobfs.so.10.0 00:11:00.278 CC lib/ftl/ftl_init.o 00:11:00.278 SYMLINK libspdk_lvol.so 00:11:00.278 CC lib/ftl/ftl_layout.o 00:11:00.536 SYMLINK libspdk_blobfs.so 00:11:00.536 CC lib/scsi/port.o 00:11:00.536 CC lib/nbd/nbd_rpc.o 00:11:00.536 CC lib/scsi/scsi.o 00:11:00.536 CC lib/nvmf/nvmf.o 00:11:00.536 CC lib/nvmf/nvmf_rpc.o 00:11:00.536 CC lib/nvmf/transport.o 00:11:00.536 LIB libspdk_nbd.a 00:11:00.536 CC lib/scsi/scsi_bdev.o 00:11:00.794 SO 
libspdk_nbd.so.7.0 00:11:00.794 CC lib/nvmf/tcp.o 00:11:00.794 CC lib/ftl/ftl_debug.o 00:11:00.794 CC lib/ublk/ublk_rpc.o 00:11:00.794 SYMLINK libspdk_nbd.so 00:11:00.794 CC lib/nvmf/stubs.o 00:11:01.052 LIB libspdk_ublk.a 00:11:01.052 SO libspdk_ublk.so.3.0 00:11:01.052 CC lib/ftl/ftl_io.o 00:11:01.052 SYMLINK libspdk_ublk.so 00:11:01.052 CC lib/ftl/ftl_sb.o 00:11:01.312 CC lib/nvmf/mdns_server.o 00:11:01.312 CC lib/scsi/scsi_pr.o 00:11:01.312 CC lib/scsi/scsi_rpc.o 00:11:01.312 CC lib/ftl/ftl_l2p.o 00:11:01.312 CC lib/ftl/ftl_l2p_flat.o 00:11:01.312 CC lib/scsi/task.o 00:11:01.312 CC lib/nvmf/rdma.o 00:11:01.312 CC lib/nvmf/auth.o 00:11:01.570 CC lib/ftl/ftl_nv_cache.o 00:11:01.570 CC lib/ftl/ftl_band.o 00:11:01.570 CC lib/ftl/ftl_band_ops.o 00:11:01.570 CC lib/ftl/ftl_writer.o 00:11:01.570 LIB libspdk_scsi.a 00:11:01.570 CC lib/ftl/ftl_rq.o 00:11:01.570 SO libspdk_scsi.so.9.0 00:11:01.570 CC lib/ftl/ftl_reloc.o 00:11:01.830 SYMLINK libspdk_scsi.so 00:11:01.830 CC lib/ftl/ftl_l2p_cache.o 00:11:01.830 CC lib/ftl/ftl_p2l.o 00:11:01.830 CC lib/ftl/mngt/ftl_mngt.o 00:11:02.093 CC lib/iscsi/conn.o 00:11:02.093 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:11:02.093 CC lib/iscsi/init_grp.o 00:11:02.093 CC lib/vhost/vhost.o 00:11:02.093 CC lib/vhost/vhost_rpc.o 00:11:02.093 CC lib/iscsi/iscsi.o 00:11:02.356 CC lib/iscsi/md5.o 00:11:02.356 CC lib/iscsi/param.o 00:11:02.356 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:11:02.356 CC lib/iscsi/portal_grp.o 00:11:02.356 CC lib/iscsi/tgt_node.o 00:11:02.622 CC lib/iscsi/iscsi_subsystem.o 00:11:02.622 CC lib/ftl/mngt/ftl_mngt_startup.o 00:11:02.622 CC lib/iscsi/iscsi_rpc.o 00:11:02.622 CC lib/ftl/mngt/ftl_mngt_md.o 00:11:02.622 CC lib/iscsi/task.o 00:11:02.892 CC lib/vhost/vhost_scsi.o 00:11:02.892 CC lib/vhost/vhost_blk.o 00:11:02.892 CC lib/vhost/rte_vhost_user.o 00:11:02.892 CC lib/ftl/mngt/ftl_mngt_misc.o 00:11:02.892 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:11:02.892 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:11:02.892 CC lib/ftl/mngt/ftl_mngt_band.o 00:11:02.892 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:11:03.163 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:11:03.163 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:11:03.163 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:11:03.163 CC lib/ftl/utils/ftl_conf.o 00:11:03.163 CC lib/ftl/utils/ftl_md.o 00:11:03.436 CC lib/ftl/utils/ftl_mempool.o 00:11:03.436 CC lib/ftl/utils/ftl_bitmap.o 00:11:03.436 CC lib/ftl/utils/ftl_property.o 00:11:03.436 LIB libspdk_nvmf.a 00:11:03.710 LIB libspdk_iscsi.a 00:11:03.710 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:11:03.710 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:11:03.710 SO libspdk_nvmf.so.18.0 00:11:03.710 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:11:03.710 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:11:03.710 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:11:03.710 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:11:03.710 SO libspdk_iscsi.so.8.0 00:11:03.973 SYMLINK libspdk_nvmf.so 00:11:03.973 CC lib/ftl/upgrade/ftl_sb_v3.o 00:11:03.973 CC lib/ftl/upgrade/ftl_sb_v5.o 00:11:03.973 CC lib/ftl/nvc/ftl_nvc_dev.o 00:11:03.973 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:11:03.973 CC lib/ftl/base/ftl_base_dev.o 00:11:03.973 CC lib/ftl/base/ftl_base_bdev.o 00:11:03.973 CC lib/ftl/ftl_trace.o 00:11:03.973 LIB libspdk_vhost.a 00:11:03.973 SYMLINK libspdk_iscsi.so 00:11:03.973 SO libspdk_vhost.so.8.0 00:11:04.232 SYMLINK libspdk_vhost.so 00:11:04.232 LIB libspdk_ftl.a 00:11:04.493 SO libspdk_ftl.so.9.0 00:11:05.060 SYMLINK libspdk_ftl.so 00:11:05.317 CC module/env_dpdk/env_dpdk_rpc.o 00:11:05.317 CC module/scheduler/dpdk_governor/dpdk_governor.o 
00:11:05.317 CC module/scheduler/gscheduler/gscheduler.o 00:11:05.317 CC module/keyring/file/keyring.o 00:11:05.317 CC module/accel/ioat/accel_ioat.o 00:11:05.574 CC module/sock/uring/uring.o 00:11:05.574 CC module/blob/bdev/blob_bdev.o 00:11:05.574 CC module/scheduler/dynamic/scheduler_dynamic.o 00:11:05.574 CC module/sock/posix/posix.o 00:11:05.574 CC module/accel/error/accel_error.o 00:11:05.574 LIB libspdk_env_dpdk_rpc.a 00:11:05.574 SO libspdk_env_dpdk_rpc.so.6.0 00:11:05.574 CC module/keyring/file/keyring_rpc.o 00:11:05.574 LIB libspdk_scheduler_gscheduler.a 00:11:05.574 LIB libspdk_scheduler_dpdk_governor.a 00:11:05.575 SYMLINK libspdk_env_dpdk_rpc.so 00:11:05.575 SO libspdk_scheduler_gscheduler.so.4.0 00:11:05.575 CC module/accel/error/accel_error_rpc.o 00:11:05.575 CC module/accel/ioat/accel_ioat_rpc.o 00:11:05.575 SO libspdk_scheduler_dpdk_governor.so.4.0 00:11:05.575 LIB libspdk_scheduler_dynamic.a 00:11:05.834 SYMLINK libspdk_scheduler_gscheduler.so 00:11:05.834 SO libspdk_scheduler_dynamic.so.4.0 00:11:05.834 LIB libspdk_keyring_file.a 00:11:05.834 SYMLINK libspdk_scheduler_dpdk_governor.so 00:11:05.834 SYMLINK libspdk_scheduler_dynamic.so 00:11:05.834 LIB libspdk_blob_bdev.a 00:11:05.834 SO libspdk_keyring_file.so.1.0 00:11:05.834 LIB libspdk_accel_error.a 00:11:05.834 SO libspdk_blob_bdev.so.11.0 00:11:05.834 SO libspdk_accel_error.so.2.0 00:11:05.834 LIB libspdk_accel_ioat.a 00:11:05.834 SYMLINK libspdk_keyring_file.so 00:11:05.834 SO libspdk_accel_ioat.so.6.0 00:11:05.834 SYMLINK libspdk_accel_error.so 00:11:05.834 SYMLINK libspdk_blob_bdev.so 00:11:05.834 CC module/accel/dsa/accel_dsa.o 00:11:05.834 CC module/accel/dsa/accel_dsa_rpc.o 00:11:06.094 SYMLINK libspdk_accel_ioat.so 00:11:06.094 CC module/accel/iaa/accel_iaa.o 00:11:06.094 CC module/accel/iaa/accel_iaa_rpc.o 00:11:06.094 LIB libspdk_accel_dsa.a 00:11:06.356 SO libspdk_accel_dsa.so.5.0 00:11:06.356 CC module/blobfs/bdev/blobfs_bdev.o 00:11:06.356 CC module/bdev/gpt/gpt.o 00:11:06.356 CC module/bdev/delay/vbdev_delay.o 00:11:06.356 CC module/bdev/error/vbdev_error.o 00:11:06.356 LIB libspdk_accel_iaa.a 00:11:06.356 SYMLINK libspdk_accel_dsa.so 00:11:06.356 CC module/bdev/delay/vbdev_delay_rpc.o 00:11:06.356 LIB libspdk_sock_uring.a 00:11:06.356 CC module/bdev/lvol/vbdev_lvol.o 00:11:06.356 SO libspdk_accel_iaa.so.3.0 00:11:06.356 SO libspdk_sock_uring.so.5.0 00:11:06.356 LIB libspdk_sock_posix.a 00:11:06.356 CC module/bdev/malloc/bdev_malloc.o 00:11:06.356 SO libspdk_sock_posix.so.6.0 00:11:06.356 SYMLINK libspdk_accel_iaa.so 00:11:06.356 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:11:06.356 SYMLINK libspdk_sock_uring.so 00:11:06.356 CC module/bdev/malloc/bdev_malloc_rpc.o 00:11:06.356 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:11:06.621 CC module/bdev/gpt/vbdev_gpt.o 00:11:06.621 CC module/bdev/error/vbdev_error_rpc.o 00:11:06.621 SYMLINK libspdk_sock_posix.so 00:11:06.621 LIB libspdk_blobfs_bdev.a 00:11:06.621 LIB libspdk_bdev_delay.a 00:11:06.621 SO libspdk_blobfs_bdev.so.6.0 00:11:06.621 SO libspdk_bdev_delay.so.6.0 00:11:06.621 LIB libspdk_bdev_error.a 00:11:06.621 SYMLINK libspdk_blobfs_bdev.so 00:11:06.621 SO libspdk_bdev_error.so.6.0 00:11:06.621 CC module/bdev/null/bdev_null.o 00:11:06.621 CC module/bdev/null/bdev_null_rpc.o 00:11:06.884 SYMLINK libspdk_bdev_delay.so 00:11:06.884 LIB libspdk_bdev_malloc.a 00:11:06.884 CC module/bdev/nvme/bdev_nvme.o 00:11:06.884 SYMLINK libspdk_bdev_error.so 00:11:06.884 SO libspdk_bdev_malloc.so.6.0 00:11:06.885 CC module/bdev/nvme/bdev_nvme_rpc.o 00:11:06.885 LIB 
libspdk_bdev_gpt.a 00:11:06.885 LIB libspdk_bdev_lvol.a 00:11:06.885 SYMLINK libspdk_bdev_malloc.so 00:11:06.885 SO libspdk_bdev_gpt.so.6.0 00:11:06.885 SO libspdk_bdev_lvol.so.6.0 00:11:06.885 CC module/bdev/passthru/vbdev_passthru.o 00:11:06.885 CC module/bdev/raid/bdev_raid.o 00:11:06.885 LIB libspdk_bdev_null.a 00:11:07.151 SYMLINK libspdk_bdev_gpt.so 00:11:07.151 CC module/bdev/split/vbdev_split.o 00:11:07.151 CC module/bdev/split/vbdev_split_rpc.o 00:11:07.151 SO libspdk_bdev_null.so.6.0 00:11:07.151 SYMLINK libspdk_bdev_lvol.so 00:11:07.151 CC module/bdev/uring/bdev_uring.o 00:11:07.151 SYMLINK libspdk_bdev_null.so 00:11:07.151 CC module/bdev/uring/bdev_uring_rpc.o 00:11:07.151 CC module/bdev/zone_block/vbdev_zone_block.o 00:11:07.151 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:11:07.151 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:11:07.421 CC module/bdev/aio/bdev_aio.o 00:11:07.421 LIB libspdk_bdev_split.a 00:11:07.421 SO libspdk_bdev_split.so.6.0 00:11:07.421 SYMLINK libspdk_bdev_split.so 00:11:07.421 CC module/bdev/aio/bdev_aio_rpc.o 00:11:07.421 LIB libspdk_bdev_passthru.a 00:11:07.421 CC module/bdev/raid/bdev_raid_rpc.o 00:11:07.421 SO libspdk_bdev_passthru.so.6.0 00:11:07.421 LIB libspdk_bdev_uring.a 00:11:07.421 LIB libspdk_bdev_zone_block.a 00:11:07.421 CC module/bdev/ftl/bdev_ftl.o 00:11:07.692 SO libspdk_bdev_uring.so.6.0 00:11:07.692 SO libspdk_bdev_zone_block.so.6.0 00:11:07.692 SYMLINK libspdk_bdev_passthru.so 00:11:07.692 CC module/bdev/ftl/bdev_ftl_rpc.o 00:11:07.692 LIB libspdk_bdev_aio.a 00:11:07.692 SYMLINK libspdk_bdev_zone_block.so 00:11:07.692 CC module/bdev/nvme/nvme_rpc.o 00:11:07.692 SYMLINK libspdk_bdev_uring.so 00:11:07.692 CC module/bdev/nvme/bdev_mdns_client.o 00:11:07.692 SO libspdk_bdev_aio.so.6.0 00:11:07.692 CC module/bdev/raid/bdev_raid_sb.o 00:11:07.692 SYMLINK libspdk_bdev_aio.so 00:11:07.692 CC module/bdev/nvme/vbdev_opal.o 00:11:07.975 CC module/bdev/nvme/vbdev_opal_rpc.o 00:11:07.975 LIB libspdk_bdev_ftl.a 00:11:07.975 CC module/bdev/raid/raid0.o 00:11:07.975 CC module/bdev/iscsi/bdev_iscsi.o 00:11:07.975 SO libspdk_bdev_ftl.so.6.0 00:11:07.975 CC module/bdev/virtio/bdev_virtio_scsi.o 00:11:07.975 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:11:07.975 SYMLINK libspdk_bdev_ftl.so 00:11:07.975 CC module/bdev/raid/raid1.o 00:11:07.975 CC module/bdev/raid/concat.o 00:11:07.975 CC module/bdev/virtio/bdev_virtio_blk.o 00:11:07.975 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:11:07.975 CC module/bdev/virtio/bdev_virtio_rpc.o 00:11:08.235 LIB libspdk_bdev_raid.a 00:11:08.235 LIB libspdk_bdev_iscsi.a 00:11:08.235 SO libspdk_bdev_raid.so.6.0 00:11:08.235 SO libspdk_bdev_iscsi.so.6.0 00:11:08.492 SYMLINK libspdk_bdev_iscsi.so 00:11:08.492 SYMLINK libspdk_bdev_raid.so 00:11:08.492 LIB libspdk_bdev_virtio.a 00:11:08.492 SO libspdk_bdev_virtio.so.6.0 00:11:08.492 SYMLINK libspdk_bdev_virtio.so 00:11:09.059 LIB libspdk_bdev_nvme.a 00:11:09.059 SO libspdk_bdev_nvme.so.7.0 00:11:09.318 SYMLINK libspdk_bdev_nvme.so 00:11:09.888 CC module/event/subsystems/iobuf/iobuf.o 00:11:09.888 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:11:09.888 CC module/event/subsystems/keyring/keyring.o 00:11:09.888 CC module/event/subsystems/scheduler/scheduler.o 00:11:09.888 CC module/event/subsystems/vmd/vmd.o 00:11:09.888 CC module/event/subsystems/vmd/vmd_rpc.o 00:11:09.888 CC module/event/subsystems/sock/sock.o 00:11:09.888 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:11:09.888 LIB libspdk_event_keyring.a 00:11:10.154 LIB libspdk_event_sock.a 00:11:10.154 LIB 
libspdk_event_scheduler.a 00:11:10.154 LIB libspdk_event_iobuf.a 00:11:10.154 LIB libspdk_event_vhost_blk.a 00:11:10.154 LIB libspdk_event_vmd.a 00:11:10.154 SO libspdk_event_keyring.so.1.0 00:11:10.154 SO libspdk_event_sock.so.5.0 00:11:10.154 SO libspdk_event_scheduler.so.4.0 00:11:10.154 SO libspdk_event_vhost_blk.so.3.0 00:11:10.154 SO libspdk_event_iobuf.so.3.0 00:11:10.154 SO libspdk_event_vmd.so.6.0 00:11:10.154 SYMLINK libspdk_event_sock.so 00:11:10.154 SYMLINK libspdk_event_keyring.so 00:11:10.154 SYMLINK libspdk_event_scheduler.so 00:11:10.154 SYMLINK libspdk_event_vhost_blk.so 00:11:10.154 SYMLINK libspdk_event_iobuf.so 00:11:10.154 SYMLINK libspdk_event_vmd.so 00:11:10.414 CC module/event/subsystems/accel/accel.o 00:11:10.673 LIB libspdk_event_accel.a 00:11:10.673 SO libspdk_event_accel.so.6.0 00:11:10.673 SYMLINK libspdk_event_accel.so 00:11:11.241 CC module/event/subsystems/bdev/bdev.o 00:11:11.500 LIB libspdk_event_bdev.a 00:11:11.500 SO libspdk_event_bdev.so.6.0 00:11:11.500 SYMLINK libspdk_event_bdev.so 00:11:11.758 CC module/event/subsystems/ublk/ublk.o 00:11:11.758 CC module/event/subsystems/nbd/nbd.o 00:11:11.759 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:11:11.759 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:11:11.759 CC module/event/subsystems/scsi/scsi.o 00:11:12.017 LIB libspdk_event_nbd.a 00:11:12.017 LIB libspdk_event_scsi.a 00:11:12.017 LIB libspdk_event_ublk.a 00:11:12.017 SO libspdk_event_nbd.so.6.0 00:11:12.017 SO libspdk_event_ublk.so.3.0 00:11:12.017 SO libspdk_event_scsi.so.6.0 00:11:12.017 LIB libspdk_event_nvmf.a 00:11:12.017 SYMLINK libspdk_event_nbd.so 00:11:12.017 SO libspdk_event_nvmf.so.6.0 00:11:12.017 SYMLINK libspdk_event_ublk.so 00:11:12.017 SYMLINK libspdk_event_scsi.so 00:11:12.275 SYMLINK libspdk_event_nvmf.so 00:11:12.533 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:11:12.533 CC module/event/subsystems/iscsi/iscsi.o 00:11:12.533 LIB libspdk_event_vhost_scsi.a 00:11:12.533 LIB libspdk_event_iscsi.a 00:11:12.533 SO libspdk_event_vhost_scsi.so.3.0 00:11:12.792 SO libspdk_event_iscsi.so.6.0 00:11:12.793 SYMLINK libspdk_event_vhost_scsi.so 00:11:12.793 SYMLINK libspdk_event_iscsi.so 00:11:13.050 SO libspdk.so.6.0 00:11:13.050 SYMLINK libspdk.so 00:11:13.307 CXX app/trace/trace.o 00:11:13.307 CC app/trace_record/trace_record.o 00:11:13.307 CC app/nvmf_tgt/nvmf_main.o 00:11:13.307 CC app/iscsi_tgt/iscsi_tgt.o 00:11:13.307 CC examples/accel/perf/accel_perf.o 00:11:13.307 CC app/spdk_tgt/spdk_tgt.o 00:11:13.307 CC examples/ioat/perf/perf.o 00:11:13.307 CC test/accel/dif/dif.o 00:11:13.307 CC examples/blob/hello_world/hello_blob.o 00:11:13.563 CC examples/bdev/hello_world/hello_bdev.o 00:11:13.563 LINK nvmf_tgt 00:11:13.563 LINK iscsi_tgt 00:11:13.563 LINK spdk_trace_record 00:11:13.563 LINK ioat_perf 00:11:13.563 LINK spdk_tgt 00:11:13.563 LINK hello_blob 00:11:13.821 LINK hello_bdev 00:11:13.821 LINK spdk_trace 00:11:13.821 LINK accel_perf 00:11:13.821 LINK dif 00:11:13.821 CC examples/ioat/verify/verify.o 00:11:13.821 CC examples/bdev/bdevperf/bdevperf.o 00:11:14.079 CC test/app/bdev_svc/bdev_svc.o 00:11:14.079 CC app/spdk_lspci/spdk_lspci.o 00:11:14.079 CC test/bdev/bdevio/bdevio.o 00:11:14.079 CC app/spdk_nvme_perf/perf.o 00:11:14.079 CC examples/blob/cli/blobcli.o 00:11:14.079 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:11:14.079 LINK verify 00:11:14.079 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:11:14.079 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:11:14.079 LINK bdev_svc 00:11:14.337 LINK spdk_lspci 00:11:14.337 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:11:14.337 LINK bdevio 00:11:14.595 LINK nvme_fuzz 00:11:14.595 TEST_HEADER include/spdk/accel.h 00:11:14.595 TEST_HEADER include/spdk/accel_module.h 00:11:14.595 TEST_HEADER include/spdk/assert.h 00:11:14.595 TEST_HEADER include/spdk/barrier.h 00:11:14.595 TEST_HEADER include/spdk/base64.h 00:11:14.595 TEST_HEADER include/spdk/bdev.h 00:11:14.595 TEST_HEADER include/spdk/bdev_module.h 00:11:14.595 LINK blobcli 00:11:14.595 TEST_HEADER include/spdk/bdev_zone.h 00:11:14.595 TEST_HEADER include/spdk/bit_array.h 00:11:14.595 TEST_HEADER include/spdk/bit_pool.h 00:11:14.595 TEST_HEADER include/spdk/blob_bdev.h 00:11:14.595 TEST_HEADER include/spdk/blobfs_bdev.h 00:11:14.595 TEST_HEADER include/spdk/blobfs.h 00:11:14.595 TEST_HEADER include/spdk/blob.h 00:11:14.595 TEST_HEADER include/spdk/conf.h 00:11:14.595 TEST_HEADER include/spdk/config.h 00:11:14.595 TEST_HEADER include/spdk/cpuset.h 00:11:14.595 TEST_HEADER include/spdk/crc16.h 00:11:14.595 TEST_HEADER include/spdk/crc32.h 00:11:14.595 TEST_HEADER include/spdk/crc64.h 00:11:14.595 TEST_HEADER include/spdk/dif.h 00:11:14.595 TEST_HEADER include/spdk/dma.h 00:11:14.595 TEST_HEADER include/spdk/endian.h 00:11:14.595 TEST_HEADER include/spdk/env_dpdk.h 00:11:14.595 TEST_HEADER include/spdk/env.h 00:11:14.595 TEST_HEADER include/spdk/event.h 00:11:14.595 TEST_HEADER include/spdk/fd_group.h 00:11:14.595 TEST_HEADER include/spdk/fd.h 00:11:14.595 TEST_HEADER include/spdk/file.h 00:11:14.595 TEST_HEADER include/spdk/ftl.h 00:11:14.595 TEST_HEADER include/spdk/gpt_spec.h 00:11:14.595 TEST_HEADER include/spdk/hexlify.h 00:11:14.595 TEST_HEADER include/spdk/histogram_data.h 00:11:14.595 TEST_HEADER include/spdk/idxd.h 00:11:14.595 TEST_HEADER include/spdk/idxd_spec.h 00:11:14.595 TEST_HEADER include/spdk/init.h 00:11:14.595 TEST_HEADER include/spdk/ioat.h 00:11:14.595 TEST_HEADER include/spdk/ioat_spec.h 00:11:14.595 TEST_HEADER include/spdk/iscsi_spec.h 00:11:14.595 TEST_HEADER include/spdk/json.h 00:11:14.595 TEST_HEADER include/spdk/jsonrpc.h 00:11:14.595 TEST_HEADER include/spdk/keyring.h 00:11:14.595 TEST_HEADER include/spdk/keyring_module.h 00:11:14.595 TEST_HEADER include/spdk/likely.h 00:11:14.595 TEST_HEADER include/spdk/log.h 00:11:14.595 LINK vhost_fuzz 00:11:14.595 TEST_HEADER include/spdk/lvol.h 00:11:14.595 TEST_HEADER include/spdk/memory.h 00:11:14.595 TEST_HEADER include/spdk/mmio.h 00:11:14.595 CC test/blobfs/mkfs/mkfs.o 00:11:14.595 TEST_HEADER include/spdk/nbd.h 00:11:14.595 TEST_HEADER include/spdk/notify.h 00:11:14.595 TEST_HEADER include/spdk/nvme.h 00:11:14.595 TEST_HEADER include/spdk/nvme_intel.h 00:11:14.595 TEST_HEADER include/spdk/nvme_ocssd.h 00:11:14.595 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:11:14.595 TEST_HEADER include/spdk/nvme_spec.h 00:11:14.853 TEST_HEADER include/spdk/nvme_zns.h 00:11:14.853 TEST_HEADER include/spdk/nvmf_cmd.h 00:11:14.853 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:11:14.853 TEST_HEADER include/spdk/nvmf.h 00:11:14.853 CC test/dma/test_dma/test_dma.o 00:11:14.853 TEST_HEADER include/spdk/nvmf_spec.h 00:11:14.853 TEST_HEADER include/spdk/nvmf_transport.h 00:11:14.853 TEST_HEADER include/spdk/opal.h 00:11:14.853 TEST_HEADER include/spdk/opal_spec.h 00:11:14.853 TEST_HEADER include/spdk/pci_ids.h 00:11:14.853 TEST_HEADER include/spdk/pipe.h 00:11:14.853 TEST_HEADER include/spdk/queue.h 00:11:14.853 TEST_HEADER include/spdk/reduce.h 00:11:14.853 TEST_HEADER include/spdk/rpc.h 00:11:14.853 TEST_HEADER include/spdk/scheduler.h 00:11:14.853 TEST_HEADER 
include/spdk/scsi.h 00:11:14.853 TEST_HEADER include/spdk/scsi_spec.h 00:11:14.853 TEST_HEADER include/spdk/sock.h 00:11:14.853 TEST_HEADER include/spdk/stdinc.h 00:11:14.853 TEST_HEADER include/spdk/string.h 00:11:14.853 TEST_HEADER include/spdk/thread.h 00:11:14.853 TEST_HEADER include/spdk/trace.h 00:11:14.853 TEST_HEADER include/spdk/trace_parser.h 00:11:14.853 TEST_HEADER include/spdk/tree.h 00:11:14.853 TEST_HEADER include/spdk/ublk.h 00:11:14.853 TEST_HEADER include/spdk/util.h 00:11:14.853 TEST_HEADER include/spdk/uuid.h 00:11:14.853 TEST_HEADER include/spdk/version.h 00:11:14.853 TEST_HEADER include/spdk/vfio_user_pci.h 00:11:14.853 TEST_HEADER include/spdk/vfio_user_spec.h 00:11:14.853 TEST_HEADER include/spdk/vhost.h 00:11:14.853 TEST_HEADER include/spdk/vmd.h 00:11:14.853 TEST_HEADER include/spdk/xor.h 00:11:14.853 TEST_HEADER include/spdk/zipf.h 00:11:14.853 CXX test/cpp_headers/accel.o 00:11:14.854 CC test/app/histogram_perf/histogram_perf.o 00:11:14.854 CC test/app/jsoncat/jsoncat.o 00:11:14.854 LINK mkfs 00:11:14.854 LINK bdevperf 00:11:14.854 LINK spdk_nvme_perf 00:11:15.111 LINK histogram_perf 00:11:15.111 LINK jsoncat 00:11:15.111 CC examples/nvme/hello_world/hello_world.o 00:11:15.111 CXX test/cpp_headers/accel_module.o 00:11:15.111 CC examples/sock/hello_world/hello_sock.o 00:11:15.111 LINK test_dma 00:11:15.111 CXX test/cpp_headers/assert.o 00:11:15.370 CC examples/nvme/reconnect/reconnect.o 00:11:15.370 CC app/spdk_nvme_identify/identify.o 00:11:15.370 LINK hello_world 00:11:15.370 CC examples/nvme/nvme_manage/nvme_manage.o 00:11:15.370 CXX test/cpp_headers/barrier.o 00:11:15.370 CC app/spdk_nvme_discover/discovery_aer.o 00:11:15.370 LINK hello_sock 00:11:15.370 CXX test/cpp_headers/base64.o 00:11:15.626 CC test/env/mem_callbacks/mem_callbacks.o 00:11:15.626 CXX test/cpp_headers/bdev.o 00:11:15.626 LINK spdk_nvme_discover 00:11:15.626 CC app/spdk_top/spdk_top.o 00:11:15.626 LINK reconnect 00:11:15.626 CC app/spdk_dd/spdk_dd.o 00:11:15.626 CC app/vhost/vhost.o 00:11:15.626 CXX test/cpp_headers/bdev_module.o 00:11:15.883 LINK iscsi_fuzz 00:11:15.883 LINK nvme_manage 00:11:15.883 CC examples/nvme/arbitration/arbitration.o 00:11:15.883 LINK vhost 00:11:15.883 CXX test/cpp_headers/bdev_zone.o 00:11:15.883 CC examples/vmd/lsvmd/lsvmd.o 00:11:15.883 CXX test/cpp_headers/bit_array.o 00:11:16.141 LINK spdk_dd 00:11:16.141 LINK spdk_nvme_identify 00:11:16.141 LINK mem_callbacks 00:11:16.141 CXX test/cpp_headers/bit_pool.o 00:11:16.141 LINK lsvmd 00:11:16.141 CC test/app/stub/stub.o 00:11:16.141 LINK arbitration 00:11:16.141 CXX test/cpp_headers/blob_bdev.o 00:11:16.141 CXX test/cpp_headers/blobfs_bdev.o 00:11:16.399 CC test/env/vtophys/vtophys.o 00:11:16.399 LINK stub 00:11:16.399 CC app/fio/nvme/fio_plugin.o 00:11:16.399 CC examples/nvmf/nvmf/nvmf.o 00:11:16.399 LINK spdk_top 00:11:16.399 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:11:16.399 CC examples/vmd/led/led.o 00:11:16.399 CXX test/cpp_headers/blobfs.o 00:11:16.399 CC examples/nvme/hotplug/hotplug.o 00:11:16.399 LINK vtophys 00:11:16.658 CXX test/cpp_headers/blob.o 00:11:16.658 CXX test/cpp_headers/conf.o 00:11:16.658 LINK env_dpdk_post_init 00:11:16.658 CC test/env/memory/memory_ut.o 00:11:16.658 LINK led 00:11:16.658 CXX test/cpp_headers/config.o 00:11:16.658 LINK hotplug 00:11:16.658 CXX test/cpp_headers/cpuset.o 00:11:16.658 LINK nvmf 00:11:16.917 CXX test/cpp_headers/crc16.o 00:11:16.917 CC examples/nvme/abort/abort.o 00:11:16.917 CC examples/nvme/cmb_copy/cmb_copy.o 00:11:16.917 CXX 
test/cpp_headers/crc32.o 00:11:16.917 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:11:16.917 CXX test/cpp_headers/crc64.o 00:11:16.917 LINK spdk_nvme 00:11:16.917 LINK cmb_copy 00:11:16.917 LINK pmr_persistence 00:11:17.175 CXX test/cpp_headers/dif.o 00:11:17.175 CC test/rpc_client/rpc_client_test.o 00:11:17.175 CC test/event/event_perf/event_perf.o 00:11:17.175 CC app/fio/bdev/fio_plugin.o 00:11:17.175 CC test/nvme/aer/aer.o 00:11:17.175 CXX test/cpp_headers/dma.o 00:11:17.175 LINK abort 00:11:17.175 CXX test/cpp_headers/endian.o 00:11:17.175 CC test/lvol/esnap/esnap.o 00:11:17.175 LINK rpc_client_test 00:11:17.433 LINK event_perf 00:11:17.433 CXX test/cpp_headers/env_dpdk.o 00:11:17.433 CC test/nvme/reset/reset.o 00:11:17.433 CXX test/cpp_headers/env.o 00:11:17.433 LINK aer 00:11:17.433 LINK memory_ut 00:11:17.433 CC test/nvme/sgl/sgl.o 00:11:17.691 CC test/event/reactor/reactor.o 00:11:17.691 CC examples/util/zipf/zipf.o 00:11:17.691 LINK spdk_bdev 00:11:17.691 CXX test/cpp_headers/event.o 00:11:17.691 CXX test/cpp_headers/fd_group.o 00:11:17.691 CC test/event/reactor_perf/reactor_perf.o 00:11:17.691 LINK reset 00:11:17.691 LINK reactor 00:11:17.691 CXX test/cpp_headers/fd.o 00:11:17.691 LINK zipf 00:11:17.949 LINK sgl 00:11:17.949 LINK reactor_perf 00:11:17.949 CC test/env/pci/pci_ut.o 00:11:17.949 CXX test/cpp_headers/file.o 00:11:17.949 CXX test/cpp_headers/ftl.o 00:11:17.949 CC examples/interrupt_tgt/interrupt_tgt.o 00:11:17.949 CC test/nvme/e2edp/nvme_dp.o 00:11:17.949 CC examples/thread/thread/thread_ex.o 00:11:17.949 CC examples/idxd/perf/perf.o 00:11:18.207 CC test/nvme/overhead/overhead.o 00:11:18.207 CC test/event/app_repeat/app_repeat.o 00:11:18.207 CXX test/cpp_headers/gpt_spec.o 00:11:18.207 CC test/nvme/err_injection/err_injection.o 00:11:18.207 LINK interrupt_tgt 00:11:18.207 LINK pci_ut 00:11:18.207 LINK app_repeat 00:11:18.207 LINK thread 00:11:18.207 LINK nvme_dp 00:11:18.465 CXX test/cpp_headers/hexlify.o 00:11:18.465 LINK err_injection 00:11:18.465 LINK idxd_perf 00:11:18.465 LINK overhead 00:11:18.465 CXX test/cpp_headers/histogram_data.o 00:11:18.465 CXX test/cpp_headers/idxd.o 00:11:18.465 CXX test/cpp_headers/idxd_spec.o 00:11:18.465 CXX test/cpp_headers/init.o 00:11:18.725 CXX test/cpp_headers/ioat.o 00:11:18.725 CC test/nvme/reserve/reserve.o 00:11:18.725 CC test/event/scheduler/scheduler.o 00:11:18.725 CC test/nvme/startup/startup.o 00:11:18.984 CC test/nvme/simple_copy/simple_copy.o 00:11:18.984 CXX test/cpp_headers/ioat_spec.o 00:11:18.984 CC test/thread/poller_perf/poller_perf.o 00:11:18.984 LINK startup 00:11:18.984 LINK scheduler 00:11:18.984 CC test/nvme/connect_stress/connect_stress.o 00:11:18.984 LINK reserve 00:11:18.984 CC test/nvme/compliance/nvme_compliance.o 00:11:19.242 CC test/nvme/boot_partition/boot_partition.o 00:11:19.242 CXX test/cpp_headers/iscsi_spec.o 00:11:19.242 LINK poller_perf 00:11:19.242 LINK simple_copy 00:11:19.242 CXX test/cpp_headers/json.o 00:11:19.242 LINK connect_stress 00:11:19.242 CXX test/cpp_headers/jsonrpc.o 00:11:19.242 LINK boot_partition 00:11:19.242 CXX test/cpp_headers/keyring.o 00:11:19.501 CXX test/cpp_headers/keyring_module.o 00:11:19.501 CC test/nvme/fused_ordering/fused_ordering.o 00:11:19.501 CC test/nvme/doorbell_aers/doorbell_aers.o 00:11:19.501 CXX test/cpp_headers/likely.o 00:11:19.501 LINK nvme_compliance 00:11:19.501 CXX test/cpp_headers/log.o 00:11:19.501 CXX test/cpp_headers/lvol.o 00:11:19.501 CC test/nvme/fdp/fdp.o 00:11:19.501 CC test/nvme/cuse/cuse.o 00:11:19.501 CXX 
test/cpp_headers/memory.o 00:11:19.759 CXX test/cpp_headers/mmio.o 00:11:19.759 LINK fused_ordering 00:11:19.759 CXX test/cpp_headers/nbd.o 00:11:19.759 CXX test/cpp_headers/notify.o 00:11:19.759 LINK doorbell_aers 00:11:19.759 CXX test/cpp_headers/nvme.o 00:11:19.759 CXX test/cpp_headers/nvme_intel.o 00:11:19.759 CXX test/cpp_headers/nvme_ocssd.o 00:11:19.759 CXX test/cpp_headers/nvme_ocssd_spec.o 00:11:19.759 LINK fdp 00:11:19.759 CXX test/cpp_headers/nvme_spec.o 00:11:19.759 CXX test/cpp_headers/nvme_zns.o 00:11:20.016 CXX test/cpp_headers/nvmf_cmd.o 00:11:20.016 CXX test/cpp_headers/nvmf_fc_spec.o 00:11:20.016 CXX test/cpp_headers/nvmf.o 00:11:20.017 CXX test/cpp_headers/nvmf_spec.o 00:11:20.017 CXX test/cpp_headers/nvmf_transport.o 00:11:20.017 CXX test/cpp_headers/opal.o 00:11:20.017 CXX test/cpp_headers/opal_spec.o 00:11:20.017 CXX test/cpp_headers/pci_ids.o 00:11:20.017 CXX test/cpp_headers/pipe.o 00:11:20.017 CXX test/cpp_headers/queue.o 00:11:20.274 CXX test/cpp_headers/reduce.o 00:11:20.274 CXX test/cpp_headers/rpc.o 00:11:20.274 CXX test/cpp_headers/scheduler.o 00:11:20.274 CXX test/cpp_headers/scsi.o 00:11:20.274 CXX test/cpp_headers/scsi_spec.o 00:11:20.274 CXX test/cpp_headers/sock.o 00:11:20.274 CXX test/cpp_headers/stdinc.o 00:11:20.274 CXX test/cpp_headers/string.o 00:11:20.274 CXX test/cpp_headers/thread.o 00:11:20.274 CXX test/cpp_headers/trace.o 00:11:20.274 CXX test/cpp_headers/trace_parser.o 00:11:20.532 CXX test/cpp_headers/tree.o 00:11:20.532 CXX test/cpp_headers/ublk.o 00:11:20.532 CXX test/cpp_headers/util.o 00:11:20.532 CXX test/cpp_headers/uuid.o 00:11:20.532 CXX test/cpp_headers/version.o 00:11:20.532 CXX test/cpp_headers/vfio_user_pci.o 00:11:20.532 CXX test/cpp_headers/vfio_user_spec.o 00:11:20.532 CXX test/cpp_headers/vhost.o 00:11:20.532 CXX test/cpp_headers/vmd.o 00:11:20.532 CXX test/cpp_headers/xor.o 00:11:20.532 LINK cuse 00:11:20.791 CXX test/cpp_headers/zipf.o 00:11:22.163 LINK esnap 00:11:22.731 00:11:22.731 real 1m6.357s 00:11:22.731 user 5m57.051s 00:11:22.731 sys 1m56.212s 00:11:22.731 09:04:35 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:11:22.731 ************************************ 00:11:22.731 END TEST make 00:11:22.731 ************************************ 00:11:22.731 09:04:35 make -- common/autotest_common.sh@10 -- $ set +x 00:11:22.731 09:04:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:11:22.731 09:04:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:11:22.731 09:04:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:11:22.731 09:04:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.731 09:04:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:11:22.731 09:04:35 -- pm/common@44 -- $ pid=5066 00:11:22.731 09:04:35 -- pm/common@50 -- $ kill -TERM 5066 00:11:22.731 09:04:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.731 09:04:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:11:22.731 09:04:35 -- pm/common@44 -- $ pid=5068 00:11:22.731 09:04:35 -- pm/common@50 -- $ kill -TERM 5068 00:11:22.731 09:04:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:22.731 09:04:35 -- nvmf/common.sh@7 -- # uname -s 00:11:22.731 09:04:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.731 09:04:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.731 09:04:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:22.731 09:04:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.731 09:04:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.731 09:04:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.731 09:04:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.731 09:04:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.731 09:04:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.731 09:04:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.990 09:04:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:11:22.990 09:04:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:11:22.990 09:04:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.990 09:04:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.990 09:04:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:22.990 09:04:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.990 09:04:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.990 09:04:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.990 09:04:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.990 09:04:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.990 09:04:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.990 09:04:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.990 09:04:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.990 09:04:35 -- paths/export.sh@5 -- # export PATH 00:11:22.990 09:04:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.990 09:04:35 -- nvmf/common.sh@47 -- # : 0 00:11:22.990 09:04:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.990 09:04:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.990 09:04:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.990 09:04:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.990 09:04:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.990 09:04:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.990 09:04:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.990 09:04:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.990 09:04:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:11:22.990 09:04:35 -- spdk/autotest.sh@32 -- # uname -s 00:11:22.990 09:04:35 -- spdk/autotest.sh@32 -- # 
'[' Linux = Linux ']' 00:11:22.990 09:04:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:11:22.990 09:04:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:22.990 09:04:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:11:22.990 09:04:35 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:22.990 09:04:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:11:22.990 09:04:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:11:22.990 09:04:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:11:22.990 09:04:35 -- spdk/autotest.sh@48 -- # udevadm_pid=52108 00:11:22.990 09:04:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:11:22.990 09:04:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:11:22.990 09:04:35 -- pm/common@17 -- # local monitor 00:11:22.990 09:04:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.990 09:04:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.990 09:04:35 -- pm/common@25 -- # sleep 1 00:11:22.990 09:04:35 -- pm/common@21 -- # date +%s 00:11:22.990 09:04:35 -- pm/common@21 -- # date +%s 00:11:22.990 09:04:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715763875 00:11:22.990 09:04:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715763875 00:11:22.990 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715763875_collect-cpu-load.pm.log 00:11:22.990 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715763875_collect-vmstat.pm.log 00:11:23.927 09:04:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:11:23.927 09:04:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:11:23.927 09:04:36 -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:23.927 09:04:36 -- common/autotest_common.sh@10 -- # set +x 00:11:23.927 09:04:36 -- spdk/autotest.sh@59 -- # create_test_list 00:11:23.927 09:04:36 -- common/autotest_common.sh@745 -- # xtrace_disable 00:11:23.927 09:04:36 -- common/autotest_common.sh@10 -- # set +x 00:11:23.927 09:04:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:11:23.927 09:04:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:11:23.927 09:04:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:11:23.927 09:04:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:11:23.927 09:04:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:11:23.927 09:04:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:11:23.927 09:04:36 -- common/autotest_common.sh@1452 -- # uname 00:11:23.927 09:04:36 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:11:23.927 09:04:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:11:23.927 09:04:36 -- common/autotest_common.sh@1472 -- # uname 00:11:23.927 09:04:36 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:11:23.927 09:04:36 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:11:23.927 09:04:36 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:11:23.927 09:04:36 -- spdk/autotest.sh@72 -- # hash 
lcov 00:11:23.928 09:04:36 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:11:23.928 09:04:36 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:11:23.928 --rc lcov_branch_coverage=1 00:11:23.928 --rc lcov_function_coverage=1 00:11:23.928 --rc genhtml_branch_coverage=1 00:11:23.928 --rc genhtml_function_coverage=1 00:11:23.928 --rc genhtml_legend=1 00:11:23.928 --rc geninfo_all_blocks=1 00:11:23.928 ' 00:11:23.928 09:04:36 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:11:23.928 --rc lcov_branch_coverage=1 00:11:23.928 --rc lcov_function_coverage=1 00:11:23.928 --rc genhtml_branch_coverage=1 00:11:23.928 --rc genhtml_function_coverage=1 00:11:23.928 --rc genhtml_legend=1 00:11:23.928 --rc geninfo_all_blocks=1 00:11:23.928 ' 00:11:23.928 09:04:36 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:11:23.928 --rc lcov_branch_coverage=1 00:11:23.928 --rc lcov_function_coverage=1 00:11:23.928 --rc genhtml_branch_coverage=1 00:11:23.928 --rc genhtml_function_coverage=1 00:11:23.928 --rc genhtml_legend=1 00:11:23.928 --rc geninfo_all_blocks=1 00:11:23.928 --no-external' 00:11:23.928 09:04:36 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:11:23.928 --rc lcov_branch_coverage=1 00:11:23.928 --rc lcov_function_coverage=1 00:11:23.928 --rc genhtml_branch_coverage=1 00:11:23.928 --rc genhtml_function_coverage=1 00:11:23.928 --rc genhtml_legend=1 00:11:23.928 --rc geninfo_all_blocks=1 00:11:23.928 --no-external' 00:11:23.928 09:04:36 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:11:24.186 lcov: LCOV version 1.14 00:11:24.186 09:04:36 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:11:34.153 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:11:34.153 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:11:34.153 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:11:34.153 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:11:34.153 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:11:34.153 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:11:39.418 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:11:39.418 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:11:54.351 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:11:54.351 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:11:54.351 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no 
functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:11:54.352 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:11:54.352 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:11:54.352 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:11:56.886 09:05:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:11:56.886 09:05:09 -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:56.887 09:05:09 -- common/autotest_common.sh@10 -- # set +x 00:11:56.887 09:05:09 -- spdk/autotest.sh@91 -- # rm -f 00:11:56.887 09:05:09 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:57.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:57.460 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:11:57.460 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:57.460 09:05:09 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:11:57.460 09:05:09 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:11:57.460 09:05:09 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:11:57.460 09:05:09 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:11:57.461 09:05:09 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:11:57.461 09:05:09 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:11:57.461 09:05:09 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:11:57.461 09:05:09 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:57.461 09:05:09 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:11:57.461 09:05:09 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:11:57.461 09:05:09 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:11:57.461 09:05:09 -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:11:57.461 09:05:09 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:57.461 09:05:09 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:11:57.461 09:05:09 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:11:57.461 09:05:09 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n2 00:11:57.461 09:05:09 -- common/autotest_common.sh@1659 -- # local device=nvme1n2 00:11:57.461 09:05:09 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:57.461 09:05:09 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:11:57.461 09:05:09 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:11:57.461 09:05:09 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n3 00:11:57.461 09:05:09 -- common/autotest_common.sh@1659 -- # local device=nvme1n3 00:11:57.461 09:05:09 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:57.461 09:05:09 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:11:57.461 09:05:09 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:11:57.461 09:05:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.461 09:05:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.461 09:05:09 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme0n1 00:11:57.461 09:05:09 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:11:57.461 09:05:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:57.718 No valid GPT data, bailing 00:11:57.718 09:05:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:57.718 09:05:09 -- scripts/common.sh@391 -- # pt= 00:11:57.718 09:05:09 -- scripts/common.sh@392 -- # return 1 00:11:57.718 09:05:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:57.718 1+0 records in 00:11:57.718 1+0 records out 00:11:57.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588451 s, 178 MB/s 00:11:57.718 09:05:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.718 09:05:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.718 09:05:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:11:57.718 09:05:09 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:11:57.718 09:05:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:57.718 No valid GPT data, bailing 00:11:57.718 09:05:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:57.718 09:05:10 -- scripts/common.sh@391 -- # pt= 00:11:57.718 09:05:10 -- scripts/common.sh@392 -- # return 1 00:11:57.718 09:05:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:57.718 1+0 records in 00:11:57.718 1+0 records out 00:11:57.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434275 s, 241 MB/s 00:11:57.718 09:05:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.718 09:05:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.718 09:05:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:11:57.718 09:05:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:11:57.718 09:05:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:11:57.718 No valid GPT data, bailing 00:11:57.718 09:05:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:11:57.718 09:05:10 -- scripts/common.sh@391 -- # pt= 00:11:57.718 09:05:10 -- scripts/common.sh@392 -- # return 1 00:11:57.718 09:05:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:11:57.718 1+0 records in 00:11:57.719 1+0 records out 00:11:57.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00778604 s, 135 MB/s 00:11:57.719 09:05:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.719 09:05:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.719 09:05:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:11:57.719 09:05:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:11:57.719 09:05:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:11:57.977 No valid GPT data, bailing 00:11:57.977 09:05:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:11:57.977 09:05:10 -- scripts/common.sh@391 -- # pt= 00:11:57.977 09:05:10 -- scripts/common.sh@392 -- # return 1 00:11:57.977 09:05:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:11:57.977 1+0 records in 00:11:57.977 1+0 records out 00:11:57.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444331 s, 236 MB/s 00:11:57.977 09:05:10 -- spdk/autotest.sh@118 -- # sync 00:11:57.977 09:05:10 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:57.977 09:05:10 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:57.977 09:05:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:59.351 09:05:11 -- spdk/autotest.sh@124 -- # uname -s 00:11:59.351 09:05:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:11:59.351 09:05:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:59.351 09:05:11 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:59.351 09:05:11 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:59.351 09:05:11 -- common/autotest_common.sh@10 -- # set +x 00:11:59.609 ************************************ 00:11:59.609 START TEST setup.sh 00:11:59.609 ************************************ 00:11:59.609 09:05:11 setup.sh -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:59.609 * Looking for test storage... 00:11:59.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:59.609 09:05:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:11:59.609 09:05:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:11:59.609 09:05:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:59.609 09:05:11 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:59.609 09:05:11 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:59.609 09:05:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:59.609 ************************************ 00:11:59.609 START TEST acl 00:11:59.609 ************************************ 00:11:59.609 09:05:11 setup.sh.acl -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:59.609 * Looking for test storage... 
00:11:59.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:59.609 09:05:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n2 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n2 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n3 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n3 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:59.609 09:05:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:11:59.609 09:05:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:11:59.609 09:05:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:11:59.609 09:05:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:11:59.609 09:05:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:11:59.609 09:05:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:11:59.609 09:05:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:59.609 09:05:12 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:00.544 09:05:12 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:12:00.544 09:05:12 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:12:00.544 09:05:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:00.544 09:05:12 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:12:00.544 09:05:12 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:12:00.544 09:05:12 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:01.479 09:05:13 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:01.479 Hugepages 00:12:01.479 node hugesize free / total 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:01.479 00:12:01.479 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:12:01.479 09:05:13 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:12:01.479 09:05:13 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:01.479 09:05:13 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:01.479 09:05:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:01.479 ************************************ 00:12:01.479 START TEST denied 00:12:01.479 ************************************ 00:12:01.479 09:05:13 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:12:01.479 09:05:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:12:01.479 09:05:13 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:12:01.479 09:05:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:12:01.479 09:05:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:12:01.479 09:05:13 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:02.413 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:02.413 09:05:14 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:03.348 00:12:03.348 real 0m1.631s 00:12:03.348 user 0m0.581s 00:12:03.348 sys 0m1.008s 00:12:03.348 09:05:15 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:03.348 ************************************ 00:12:03.348 END TEST denied 00:12:03.348 ************************************ 00:12:03.348 09:05:15 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:12:03.348 09:05:15 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:12:03.348 09:05:15 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:03.348 09:05:15 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:03.348 09:05:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:03.348 ************************************ 00:12:03.348 START TEST allowed 00:12:03.348 ************************************ 00:12:03.348 09:05:15 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:12:03.348 09:05:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:12:03.348 09:05:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:12:03.348 09:05:15 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:12:03.348 09:05:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:12:03.348 09:05:15 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:04.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:04.281 09:05:16 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:04.847 00:12:04.847 real 0m1.742s 00:12:04.847 user 0m0.725s 00:12:04.847 sys 0m1.012s 00:12:04.847 ************************************ 00:12:04.847 END TEST allowed 00:12:04.847 ************************************ 00:12:04.847 09:05:17 
setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:04.847 09:05:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:12:05.108 ************************************ 00:12:05.108 END TEST acl 00:12:05.108 ************************************ 00:12:05.108 00:12:05.108 real 0m5.392s 00:12:05.108 user 0m2.223s 00:12:05.108 sys 0m3.148s 00:12:05.108 09:05:17 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:05.108 09:05:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:05.108 09:05:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:05.108 09:05:17 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:05.108 09:05:17 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:05.108 09:05:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:05.108 ************************************ 00:12:05.108 START TEST hugepages 00:12:05.108 ************************************ 00:12:05.108 09:05:17 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:05.108 * Looking for test storage... 00:12:05.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 6074684 kB' 'MemAvailable: 7453776 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 426320 kB' 'Inactive: 1269380 kB' 'Active(anon): 105944 kB' 'Inactive(anon): 10704 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 106336 kB' 'Mapped: 49552 kB' 'Shmem: 10484 kB' 'KReclaimable: 73660 kB' 'Slab: 145428 kB' 'SReclaimable: 73660 kB' 'SUnreclaim: 71768 kB' 'KernelStack: 4768 kB' 'PageTables: 3296 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12407572 kB' 'Committed_AS: 335312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.108 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:12:05.109 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:12:05.109-00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31-32 -- # [... remaining /proc/meminfo fields Inactive(anon) through HugePages_Total read and skipped (no Hugepagesize match) ...]
00:12:05.110 09:05:17 setup.sh.hugepages --
setup/common.sh@32 -- # continue 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:05.110 09:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:05.111 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:05.373 09:05:17 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:05.373 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:05.373 09:05:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:12:05.373 09:05:17 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:05.373 09:05:17 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:05.373 09:05:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:05.373 ************************************ 00:12:05.373 START TEST default_setup 00:12:05.373 ************************************ 00:12:05.373 09:05:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:12:05.373 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:12:05.373 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:12:05.373 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:12:05.374 09:05:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:05.955 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:05.955 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.227 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:12:06.227 
09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.227 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8158196 kB' 'MemAvailable: 9537144 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 442188 kB' 'Inactive: 1269348 kB' 'Active(anon): 121812 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122076 kB' 'Mapped: 49340 kB' 'Shmem: 10468 kB' 'KReclaimable: 73356 kB' 'Slab: 144984 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71628 kB' 'KernelStack: 4704 kB' 'PageTables: 3388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.228 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
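The xtrace above shows setup/common.sh's get_meminfo helper walking /proc/meminfo one "field: value" line at a time until the requested key (here AnonHugePages) turns up. A minimal sketch of that lookup, reconstructed from the trace rather than taken from the SPDK source -- the mapfile and "Node N" prefix handling is simplified to a plain read loop, and get_meminfo_sketch is an illustrative name:
    get_meminfo_sketch() {
        # $1 = field to report (e.g. Hugepagesize, AnonHugePages, HugePages_Surp)
        # $2 = optional NUMA node id; if given and present in sysfs, read that node's meminfo
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # NOTE: the per-node files prefix every line with "Node N "; the trace strips that
        # with "${mem[@]#Node +([0-9]) }", which this simplified loop omits.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field is skipped, as traced
            echo "$val"                        # e.g. 2048 for Hugepagesize, 0 for AnonHugePages
            return 0
        done <"$mem_f"
        return 1
    }
    # get_meminfo_sketch Hugepagesize   -> 2048
    # get_meminfo_sketch AnonHugePages  -> 0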
00:12:06.228-00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... /proc/meminfo fields Active(file) through VmallocTotal read and skipped (no AnonHugePages match) ...]
00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- #
continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8158196 kB' 'MemAvailable: 9537144 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441672 kB' 'Inactive: 1269348 kB' 'Active(anon): 121296 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121820 kB' 'Mapped: 49400 kB' 'Shmem: 10468 kB' 'KReclaimable: 73356 kB' 'Slab: 144980 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71624 kB' 'KernelStack: 4624 kB' 'PageTables: 3204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.229 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue
00:12:06.229-00:12:06.230 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... /proc/meminfo fields Active through Bounce read and skipped (no HugePages_Surp match) ...]
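The numbers this verification step reads back follow from the earlier allocation: default_setup asked for 2097152 kB backed by the default 2048 kB huge pages, and the /proc/meminfo snapshots above report HugePages_Total and HugePages_Free of 1024 with HugePages_Surp and HugePages_Rsvd at 0. A short sketch of that arithmetic (variable names are illustrative, not taken from the harness):
    size_kb=2097152                               # size passed to get_test_nr_hugepages
    hugepagesize_kb=2048                          # default huge page size detected earlier
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "$nr_hugepages"                          # 1024, matching HugePages_Total/HugePages_Free
    echo "$(( nr_hugepages * hugepagesize_kb ))"  # 2097152 kB, matching the Hugetlb line
    # HugePages_Surp and HugePages_Rsvd remain 0, which is what verify_nr_hugepages reads here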
00:12:06.230-00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... /proc/meminfo fields WritebackTmp through HugePages_Total read and skipped (no HugePages_Surp match) ...]
00:12:06.231 09:05:18 setup.sh.hugepages.default_setup --
setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8158196 kB' 'MemAvailable: 9537144 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 442036 kB' 'Inactive: 1269348 kB' 'Active(anon): 121660 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121676 kB' 'Mapped: 49400 kB' 'Shmem: 10468 kB' 'KReclaimable: 73356 kB' 'Slab: 144980 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71624 kB' 'KernelStack: 4640 kB' 'PageTables: 3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 351276 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.231 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.232 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:06.233 nr_hugepages=1024 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:06.233 resv_hugepages=0 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:06.233 surplus_hugepages=0 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:06.233 anon_hugepages=0 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8158196 kB' 'MemAvailable: 9537144 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 442008 kB' 'Inactive: 1269348 kB' 'Active(anon): 121632 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121648 kB' 'Mapped: 49400 kB' 'Shmem: 10468 kB' 'KReclaimable: 73356 kB' 'Slab: 144968 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71612 kB' 'KernelStack: 4624 kB' 'PageTables: 3204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 
7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.233 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 
09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.507 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 
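At this point the trace has called get_meminfo three times against /proc/meminfo: HugePages_Surp and HugePages_Rsvd both return 0 and HugePages_Total returns 1024, so the hugepages.sh@110 check (( 1024 == nr_hugepages + surp + resv )) passes, and get_nodes seeds nodes_sys[0]=1024 before the per-node loop that continues below. The pattern behind each call is visible in the setup/common.sh@17-33 lines: pick /proc/meminfo (or a per-node meminfo file when one exists), strip any "Node <N> " prefix, then split each line on ': ' until the requested key matches. A minimal stand-alone sketch of that pattern, simplified from the trace rather than copied from the repo's helper:

    # Hedged sketch of the get_meminfo pattern shown above; the paths and
    # field names come from the log, the implementation details are illustrative.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With an empty $node the test path .../node/node/meminfo does not
        # exist (exactly as seen in the trace), so /proc/meminfo is used.
        if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
            mem_f=/sys/devices/system/node/node${node}/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <N> "; drop it, then
        # split on ':'/space and print the value of the requested field.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Used the way the trace uses it, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp prints 0 on this box.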
00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8158196 kB' 'MemUsed: 4074048 kB' 'SwapCached: 0 kB' 'Active: 441696 kB' 'Inactive: 1269352 kB' 'Active(anon): 121320 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1589536 kB' 'Mapped: 49400 kB' 'AnonPages: 121616 kB' 'Shmem: 10468 kB' 'KernelStack: 4676 kB' 'PageTables: 3164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73356 kB' 'Slab: 144968 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.508 09:05:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.508 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.509 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:06.509 09:05:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
[... identical xtrace repeats for each remaining /proc/meminfo key (Dirty through HugePages_Free), none matching HugePages_Surp ...]
00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
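For readers skimming the trace: the long run of key checks condensed above is setup/common.sh's get_meminfo walking every field of /proc/meminfo until it reaches the requested one (here HugePages_Surp, which comes back 0). A minimal sketch of that lookup pattern, with a hypothetical function name and without the script's per-node handling:

  # Minimal stand-in for the lookup traced above (not the SPDK helper itself):
  # split each /proc/meminfo line on ': ', compare the key, print the value.
  # Per-node lookups additionally read /sys/devices/system/node/node<N>/meminfo
  # and strip the leading "Node <N> " prefix, as the mem=(...) expansion in the
  # trace shows.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value HugePages_Surp    # prints 0 on the VM traced above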
00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:06.510 node0=1024 expecting 1024 00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:06.510 00:12:06.510 real 0m1.216s 00:12:06.510 user 0m0.537s 00:12:06.510 sys 0m0.574s 00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:06.510 09:05:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:12:06.510 ************************************ 00:12:06.510 END TEST default_setup 00:12:06.510 ************************************ 00:12:06.510 09:05:18 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:12:06.510 09:05:18 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:06.510 09:05:18 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:06.510 09:05:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:06.510 ************************************ 00:12:06.510 START TEST per_node_1G_alloc 00:12:06.510 ************************************ 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:06.510 
09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:06.510 09:05:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:06.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:06.779 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:06.779 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.058 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9205760 kB' 'MemAvailable: 10584716 kB' 'Buffers: 2436 kB' 'Cached: 1587104 kB' 'SwapCached: 0 kB' 'Active: 442368 kB' 'Inactive: 1269332 kB' 'Active(anon): 121992 kB' 'Inactive(anon): 10640 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258692 kB' 
'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122004 kB' 'Mapped: 49596 kB' 'Shmem: 10468 kB' 'KReclaimable: 73356 kB' 'Slab: 144992 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71636 kB' 'KernelStack: 4724 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.059 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.059 09:05:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace repeats for each remaining /proc/meminfo key (Active through Percpu), none matching AnonHugePages ...]
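The per_node_1G_alloc prologue traced earlier is, at bottom, a sizing calculation: the test asks for 1048576 kB (1 GiB) on node 0, which at the 2048 kB default hugepage size reported in the meminfo dump works out to 512 pages, and the trace then hands NRHUGE=512 HUGENODE=0 to scripts/setup.sh. A hedged sketch of that arithmetic (variable names are illustrative; the numbers are the ones in the log):

  # Illustrative only; reproduces the sizing math visible in the trace, not hugepages.sh itself.
  size_kb=1048576                                                 # 1 GiB requested by the test
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
  nr_hugepages=$(( size_kb / hugepage_kb ))                       # 512
  echo "would run: NRHUGE=$nr_hugepages HUGENODE=0 scripts/setup.sh"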
00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9205760 kB' 'MemAvailable: 10584716 kB' 'Buffers: 2436 kB' 'Cached: 1587104 kB' 'SwapCached: 0 kB' 'Active: 442368 kB' 'Inactive: 1269332 kB' 'Active(anon): 121992 kB' 'Inactive(anon): 10640 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122264 kB' 'Mapped: 49596 kB' 'Shmem: 10468 kB' 'KReclaimable: 73356 kB' 'Slab: 144992 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71636 kB' 'KernelStack: 4724 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:07.060 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... identical xtrace repeats for each remaining /proc/meminfo key (MemFree through HugePages_Free), none matching HugePages_Surp ...] 00:12:07.061
09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.061 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9205512 kB' 'MemAvailable: 10584468 kB' 'Buffers: 2436 kB' 'Cached: 1587104 kB' 'SwapCached: 0 kB' 'Active: 442268 kB' 'Inactive: 1269324 kB' 'Active(anon): 121892 kB' 'Inactive(anon): 10632 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122152 kB' 'Mapped: 49904 kB' 'Shmem: 10468 kB' 'KReclaimable: 73356 kB' 'Slab: 145020 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71664 kB' 'KernelStack: 4772 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 
kB' 'DirectMap1G: 7340032 kB'
[... identical xtrace repeats for each /proc/meminfo key (MemTotal through Slab), none matching HugePages_Rsvd; the scan continues below ...] 00:12:07.062 09:05:19
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.062 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:07.063 nr_hugepages=512 00:12:07.063 resv_hugepages=0 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:07.063 surplus_hugepages=0 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:07.063 anon_hugepages=0 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.063 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9205512 kB' 'MemAvailable: 10584468 kB' 'Buffers: 2436 kB' 'Cached: 1587104 kB' 'SwapCached: 0 kB' 'Active: 442324 kB' 'Inactive: 1269324 kB' 'Active(anon): 121948 kB' 'Inactive(anon): 10632 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122484 kB' 'Mapped: 49644 kB' 'Shmem: 10468 kB' 'KReclaimable: 73356 kB' 'Slab: 145016 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71660 kB' 'KernelStack: 4740 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.064 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9205512 kB' 'MemUsed: 3026732 kB' 'SwapCached: 0 kB' 'Active: 442212 kB' 'Inactive: 1269312 kB' 'Active(anon): 121836 kB' 'Inactive(anon): 10624 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1589536 kB' 'Mapped: 49388 kB' 'AnonPages: 122268 kB' 'Shmem: 10468 kB' 'KernelStack: 4708 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73356 kB' 'Slab: 145012 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Surp: 0' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.065 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
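
The scan traced around this point is setup/common.sh's get_meminfo walking a meminfo file key by key: it defaults to /proc/meminfo, switches to /sys/devices/system/node/node$node/meminfo when a node number is passed (as in the node=0 lookup here), strips the "Node N " prefix with an extglob substitution, and reads each "key: value" pair with IFS=': ', skipping with `continue` until the requested key (HugePages_Surp in this pass) matches, then echoes the value and returns. A minimal standalone sketch of that idiom, under assumed names (get_meminfo_sketch is hypothetical, not the verbatim SPDK helper):

    #!/usr/bin/env bash
    # Simplified sketch of the meminfo lookup seen in the trace above.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries read the node's own meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every other key, as in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total      # e.g. 512 on the VM in this run
    get_meminfo_sketch HugePages_Surp 0     # per-node lookup against node0
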
00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.066 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.067 09:05:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:07.067 node0=512 expecting 512 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:07.067 00:12:07.067 real 0m0.550s 00:12:07.067 user 0m0.259s 00:12:07.067 sys 0m0.330s 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:07.067 09:05:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:07.067 ************************************ 00:12:07.067 END TEST per_node_1G_alloc 00:12:07.067 ************************************ 00:12:07.067 09:05:19 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:12:07.067 09:05:19 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:07.067 09:05:19 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:07.067 09:05:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:07.067 ************************************ 00:12:07.067 START TEST even_2G_alloc 00:12:07.067 ************************************ 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:12:07.067 
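At this point per_node_1G_alloc has passed ("node0=512 expecting 512") and even_2G_alloc begins by turning the requested 2097152 kB into a hugepage count. A minimal sketch of that conversion, assuming the default 2048 kB hugepage size reported in the meminfo dumps that follow; the variable and helper names here are illustrative, not the SPDK scripts themselves.

#!/usr/bin/env bash
# Minimal sketch (illustrative, not setup/hugepages.sh itself): derive the
# hugepage count that even_2G_alloc requests from a size given in kB.
set -euo pipefail

size_kb=2097152            # requested size, as passed to get_test_nr_hugepages above
default_hugepage_kb=2048   # Hugepagesize reported in the meminfo dumps below

# The size must cover at least one default-sized hugepage before a count is derived.
if (( size_kb >= default_hugepage_kb )); then
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 2097152 / 2048 = 1024
fi

# With no explicit node list, the whole count lands on the single test node,
# consistent with NRHUGE=1024 in the trace that follows.
declare -a nodes_test=()
nodes_test[0]=$nr_hugepages

echo "nr_hugepages=${nr_hugepages}"   # -> 1024
echo "node0=${nodes_test[0]}"         # -> 1024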
09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:07.067 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:07.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:07.643 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:07.643 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8154248 kB' 'MemAvailable: 9533208 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 442136 kB' 'Inactive: 1269364 kB' 'Active(anon): 121760 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121968 kB' 'Mapped: 49532 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145156 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71800 kB' 'KernelStack: 4740 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.643 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8153996 kB' 'MemAvailable: 9532956 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441740 kB' 'Inactive: 1269356 kB' 'Active(anon): 121364 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121832 kB' 'Mapped: 49404 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145160 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 4768 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.644 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.645 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8153996 kB' 'MemAvailable: 9532956 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441812 kB' 'Inactive: 1269356 kB' 'Active(anon): 121436 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121688 kB' 'Mapped: 49404 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145160 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 4768 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.646 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
(setup/common.sh@31-32 then runs the identical IFS=': ' / read -r var val _ / [[ ... ]] / continue cycle for each remaining /proc/meminfo field, MemFree through ShmemPmdMapped; none of them matches HugePages_Rsvd)
00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:07.648 nr_hugepages=1024 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:07.648 resv_hugepages=0 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:07.648 
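The trace just above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches HugePages_Rsvd, echoing its value (0) and returning, which is where hugepages.sh gets resv=0. As a standalone sketch of that lookup pattern -- hypothetical helper name, written only from what the xtrace shows rather than copied from the repo:

    # Minimal sketch of the lookup visible in the xtrace above (meminfo_value is
    # a hypothetical name; the real helper is setup/common.sh's get_meminfo).
    meminfo_value() {
        local get=$1 var val rest
        while IFS=': ' read -r var val rest; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "$val"                        # kB for sizes, a bare count for HugePages_* fields
            return 0
        done < /proc/meminfo
        return 1
    }
    # e.g. resv=$(meminfo_value HugePages_Rsvd)   # -> 0 in this run

The real script reads the whole file into an array first (the mapfile -t mem and "${mem[@]#Node +([0-9]) }" records in the trace) so it can also strip the "Node N" prefix of per-node files; the skip-on-mismatch loop is the part the xtrace shows at length.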
surplus_hugepages=0 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:07.648 anon_hugepages=0 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8154248 kB' 'MemAvailable: 9533208 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441820 kB' 'Inactive: 1269356 kB' 'Active(anon): 121444 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121700 kB' 'Mapped: 49404 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145160 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 4768 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.648 09:05:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.648 09:05:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
(the same setup/common.sh@31-32 skip cycle then runs for each remaining /proc/meminfo field, MemAvailable through FileHugePages; none of them matches HugePages_Total)
00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=':
' 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:12:07.649 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@20 -- # local mem_f mem 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8154248 kB' 'MemUsed: 4077996 kB' 'SwapCached: 0 kB' 'Active: 441828 kB' 'Inactive: 1269356 kB' 'Active(anon): 121452 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1589536 kB' 'Mapped: 49404 kB' 'AnonPages: 121708 kB' 'Shmem: 10464 kB' 'KernelStack: 4768 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73356 kB' 'Slab: 145160 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.650 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
(the same setup/common.sh@31-32 skip cycle then runs for each remaining node0 meminfo field, Inactive through ShmemHugePages; none of them matches HugePages_Surp)
00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:07.651 node0=1024 expecting 1024 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:07.651 00:12:07.651 real 0m0.597s 00:12:07.651 user 0m0.284s 00:12:07.651 sys 0m0.348s 00:12:07.651 09:05:20 
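That completes the even_2G_alloc pass: get_meminfo returned HugePages_Total=1024 from /proc/meminfo and HugePages_Surp=0 from /sys/devices/system/node/node0/meminfo, hugepages.sh checked (( 1024 == nr_hugepages + surp + resv )) and printed 'node0=1024 expecting 1024'. A hedged sketch of that verification, reusing the meminfo_value sketch above plus a hypothetical per-node variant (the real logic is spread across setup/hugepages.sh@107-130 and setup/common.sh):

    # Sketch only: node_meminfo_value and verify_even_alloc are hypothetical names.
    node_meminfo_value() {
        local node=$1 get=$2 var val rest
        # /sys/devices/system/node/nodeN/meminfo lines are prefixed with "Node N"
        while IFS=': ' read -r _ _ var val rest; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    verify_even_alloc() {
        local expected=$1
        local total resv surp
        total=$(meminfo_value HugePages_Total)
        resv=$(meminfo_value HugePages_Rsvd)
        surp=$(node_meminfo_value 0 HugePages_Surp)
        (( total == expected + surp + resv )) || return 1
        echo "node0=${total} expecting ${expected}"
    }
    # verify_even_alloc 1024   # matches the 'node0=1024 expecting 1024' line above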
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:07.651 09:05:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:07.651 ************************************ 00:12:07.651 END TEST even_2G_alloc 00:12:07.651 ************************************ 00:12:07.651 09:05:20 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:12:07.651 09:05:20 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:07.651 09:05:20 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:07.651 09:05:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:07.910 ************************************ 00:12:07.910 START TEST odd_alloc 00:12:07.910 ************************************ 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:07.910 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:08.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:08.171 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 
00:12:08.171 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8154304 kB' 'MemAvailable: 9533264 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441960 kB' 'Inactive: 1269364 kB' 'Active(anon): 121584 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121824 kB' 'Mapped: 49400 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145060 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71704 kB' 'KernelStack: 4688 kB' 'PageTables: 3336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.171 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.171 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.172 
09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.172 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8154304 kB' 'MemAvailable: 9533264 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441528 kB' 'Inactive: 1269356 kB' 'Active(anon): 121152 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121648 kB' 'Mapped: 49272 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145056 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71700 kB' 'KernelStack: 4704 kB' 'PageTables: 3364 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.173 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 
-- # echo 0 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:08.174 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8154556 kB' 'MemAvailable: 9533516 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441632 kB' 'Inactive: 1269356 kB' 'Active(anon): 121256 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121496 kB' 'Mapped: 49272 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145048 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 4672 kB' 'PageTables: 3292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.175 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.176 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 
09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:08.437 nr_hugepages=1025 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:12:08.437 resv_hugepages=0 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:08.437 surplus_hugepages=0 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:08.437 anon_hugepages=0 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12232244 kB' 'MemFree: 8154304 kB' 'MemAvailable: 9533264 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441656 kB' 'Inactive: 1269356 kB' 'Active(anon): 121280 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121784 kB' 'Mapped: 49272 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145048 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 4672 kB' 'PageTables: 3292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.437 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 
09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.438 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8159992 kB' 'MemUsed: 4072252 kB' 'SwapCached: 0 kB' 'Active: 441636 kB' 'Inactive: 1269356 kB' 'Active(anon): 121260 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1589536 kB' 'Mapped: 49272 kB' 'AnonPages: 121760 kB' 'Shmem: 10464 kB' 'KernelStack: 4672 kB' 'PageTables: 3292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73356 kB' 'Slab: 145048 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.439 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:08.440 node0=1025 
expecting 1025 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:12:08.440 00:12:08.440 real 0m0.591s 00:12:08.440 user 0m0.298s 00:12:08.440 sys 0m0.326s 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:08.440 09:05:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:08.440 ************************************ 00:12:08.440 END TEST odd_alloc 00:12:08.440 ************************************ 00:12:08.440 09:05:20 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:12:08.440 09:05:20 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:08.440 09:05:20 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:08.440 09:05:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:08.440 ************************************ 00:12:08.440 START TEST custom_alloc 00:12:08.440 ************************************ 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 
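The wall of "[[ field == \H\u\g\e\P\a\g\e\s... ]] / continue" records above is bash xtrace from a helper that scans /proc/meminfo one "name: value" pair at a time until it reaches the requested field, falling back from a per-node sysfs meminfo to the system-wide file when no node is given. A minimal standalone sketch of that technique follows; it is not the SPDK setup/common.sh helper itself, and the function name, argument handling, and defaults here are illustrative assumptions only.

    #!/usr/bin/env bash
    # Minimal sketch (not SPDK's setup/common.sh): scan a meminfo file with
    # IFS=': ' and print the value of one requested field.
    get_meminfo_field() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # When a NUMA node is given and its sysfs meminfo exists, read that instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node [0-9]* }            # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                      # value only, e.g. "1025" or "12232244"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # Usage: system-wide vs. node 0 hugepage totals, the comparison the
    # odd_alloc pass above just finished making.
    get_meminfo_field HugePages_Total
    get_meminfo_field HugePages_Total 0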
00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:12:08.440 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:08.441 09:05:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:08.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:08.698 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:08.698 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:08.961 09:05:21 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.961 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9208684 kB' 'MemAvailable: 10587644 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 442416 kB' 'Inactive: 1269372 kB' 'Active(anon): 122040 kB' 'Inactive(anon): 10676 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122048 kB' 'Mapped: 49464 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145072 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71716 kB' 'KernelStack: 4796 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 
09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.962 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
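The trace through this stretch is the get_meminfo helper from setup/common.sh resolving AnonHugePages: it snapshots /proc/meminfo (or a node's own meminfo under /sys/devices/system/node/nodeN/ when a node is given), strips any "Node N" prefix, then walks the key/value pairs, skipping every key until the requested one matches and its value is echoed. A minimal sketch reconstructed from the traced commands, not the exact helper source:

  #!/usr/bin/env bash
  # Approximate sketch of the get_meminfo helper driving the trace above
  # (setup/common.sh); names follow the traced commands, details may differ.
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local mem var val _ line

      # A per-node lookup reads that node's meminfo from sysfs instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; strip it.
      mem=("${mem[@]#Node +([0-9]) }")

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"    # "0" for AnonHugePages on the VM above
          return 0
      done
      return 1
  }

  get_meminfo AnonHugePages   # prints 0 in the run captured here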
00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9208432 kB' 'MemAvailable: 10587392 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 442008 kB' 'Inactive: 1269364 kB' 'Active(anon): 121632 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121876 kB' 'Mapped: 49344 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145076 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71720 kB' 'KernelStack: 4700 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13980436 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.963 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 
09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.964 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
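The pass in progress here is the same helper asked for HugePages_Surp; it matches a few lines below and returns 0, and the HugePages_Rsvd pass that follows returns 0 as well. Surplus pages are, roughly, huge pages allocated beyond the static pool via overcommit, while reserved pages are promised to mappings but not yet faulted in, so zeros for both mean the 512-page pool is untouched. A convenience one-liner for checking the same counters outside the harness (not part of the SPDK scripts):

  awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
  # On the VM captured above this prints:
  #   HugePages_Total: 512
  #   HugePages_Free: 512
  #   HugePages_Rsvd: 0
  #   HugePages_Surp: 0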
00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9208180 kB' 'MemAvailable: 10587140 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441920 kB' 'Inactive: 1269364 kB' 'Active(anon): 121544 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121788 kB' 'Mapped: 49344 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145072 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71716 kB' 'KernelStack: 4692 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.965 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 
09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.966 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:08.967 nr_hugepages=512 00:12:08.967 resv_hugepages=0 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:08.967 surplus_hugepages=0 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:08.967 anon_hugepages=0 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:08.967 09:05:21 
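With anon, surp and resv all collected as 0, hugepages.sh@107 and @109 above assert that the 512 pages the custom-allocation test expects are fully explained by nr_hugepages plus surplus plus reserved, and then by nr_hugepages alone. A standalone restatement of those two assertions (variable roles inferred from the echoed names; illustration only):

  nr_hugepages=512; surp=0; resv=0; anon=0   # values echoed by the run above

  # hugepages.sh@107: expected pages == static pool + surplus + reserved
  (( 512 == nr_hugepages + surp + resv )) || { echo 'hugepage accounting mismatch' >&2; exit 1; }
  # hugepages.sh@109: with surp=resv=0 this reduces to the pool itself
  (( 512 == nr_hugepages )) || { echo 'unexpected surplus/reserved pages' >&2; exit 1; }
  echo 'all 512 huge pages accounted for'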
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9207932 kB' 'MemAvailable: 10586892 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 441832 kB' 'Inactive: 1269364 kB' 'Active(anon): 121456 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121700 kB' 'Mapped: 49344 kB' 'Shmem: 10464 kB' 'KReclaimable: 73356 kB' 'Slab: 145072 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71716 kB' 'KernelStack: 4692 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 351276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 
09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.967 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 
09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 
09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:08.968 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9207932 kB' 'MemUsed: 3024312 kB' 'SwapCached: 0 kB' 'Active: 442068 kB' 'Inactive: 1269364 kB' 'Active(anon): 121692 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1589536 kB' 'Mapped: 49344 kB' 'AnonPages: 121932 kB' 'Shmem: 10464 kB' 'KernelStack: 4692 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73356 kB' 'Slab: 145072 kB' 'SReclaimable: 73356 kB' 'SUnreclaim: 71716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.969 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:08.970 node0=512 expecting 512 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:08.970 00:12:08.970 real 0m0.604s 00:12:08.970 user 0m0.296s 00:12:08.970 sys 0m0.346s 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:08.970 09:05:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:08.970 ************************************ 00:12:08.970 END TEST custom_alloc 00:12:08.970 ************************************ 00:12:08.970 09:05:21 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:12:08.970 09:05:21 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:08.970 09:05:21 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:08.970 09:05:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:08.970 ************************************ 00:12:08.970 START TEST no_shrink_alloc 00:12:08.970 ************************************ 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # 
no_shrink_alloc 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:08.970 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:09.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:09.538 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.538 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:09.538 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8166972 kB' 'MemAvailable: 9545928 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 437712 kB' 'Inactive: 1269372 kB' 'Active(anon): 117336 kB' 'Inactive(anon): 10676 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117724 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 73352 kB' 'Slab: 144908 kB' 'SReclaimable: 73352 kB' 'SUnreclaim: 71556 kB' 'KernelStack: 4660 kB' 'PageTables: 3364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
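Each long run of "continue" entries in this trace is one pass of the meminfo scanner in setup/common.sh: the file is read one "Key: value" pair at a time and every field is skipped until the requested key matches, at which point its value is echoed back to the caller. A minimal sketch of that scanning pattern, under a hypothetical helper name get_meminfo_value (simplified; not the real setup/common.sh implementation):

    # Hypothetical helper illustrating the field-scan pattern seen above.
    get_meminfo_value() {
        local get=$1 node=${2:-} var val rest
        local mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy instead, as the trace does for node 0.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # sysfs lines carry a "Node <n> " prefix; drop it, then split on ':' and spaces.
        while IFS=': ' read -r var val rest; do
            if [[ $var == "$get" ]]; then
                echo "$val"            # e.g. 512 for HugePages_Total in the run above
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # get_meminfo_value HugePages_Total     -> 512 in the custom_alloc run above
    # get_meminfo_value HugePages_Surp 0    -> 0 for node 0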
00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
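The no_shrink_alloc test being set up in this stretch sizes its pool from the requested amount divided by the hugepage size (2097152 / 2048 = 1024 here, matching the 'Hugetlb: 2097152 kB' line in the meminfo dump above), assigns that count to node 0, and later compares the per-node meminfo against it, just as the earlier custom_alloc run printed "node0=512 expecting 512". A rough sketch of that sizing and per-node check, reusing the hypothetical get_meminfo_value helper from the previous sketch (illustrative names, not the actual setup/hugepages.sh logic):

    # Rough sketch of the pool sizing and per-node verification.
    size_kb=2097152                                          # requested pool size (2 GiB)
    hugepagesize_kb=$(get_meminfo_value Hugepagesize)        # 2048 on this host
    nr_hugepages=$(( size_kb / hugepagesize_kb ))            # -> 1024 pages
    declare -A nodes_expected=( [0]=$nr_hugepages )          # all pages pinned to node 0
    for node in "${!nodes_expected[@]}"; do
        actual=$(get_meminfo_value HugePages_Total "$node")
        echo "node$node=$actual expecting ${nodes_expected[$node]}"
        [[ $actual == "${nodes_expected[$node]}" ]] || exit 1
    done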
00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8166972 kB' 'MemAvailable: 9545928 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 437780 kB' 'Inactive: 1269356 kB' 'Active(anon): 117404 kB' 
'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117696 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 73352 kB' 'Slab: 144880 kB' 'SReclaimable: 73352 kB' 'SUnreclaim: 71528 kB' 'KernelStack: 4580 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.538 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
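The printf'd snapshot being scanned here reports roughly 12 GB of MemTotal, 1024 hugepages of 2048 kB each, all of them free, and Hugetlb: 2097152 kB, which is exactly 1024 x 2048 kB. As an illustrative cross-check only (not what the test scripts themselves run), a single field can be pulled straight from /proc/meminfo:

    # Illustrative cross-check: read one field directly with awk.
    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo        # prints 0 on this host
    awk '$1 == "Hugepagesize:"   {print $2, $3}' /proc/meminfo    # 2048 kB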
00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8166972 kB' 'MemAvailable: 9545928 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 437696 kB' 'Inactive: 1269356 kB' 'Active(anon): 117320 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117612 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 73352 kB' 'Slab: 144880 kB' 'SReclaimable: 73352 kB' 'SUnreclaim: 71528 kB' 'KernelStack: 4564 kB' 'PageTables: 3136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 
0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.539 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
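The test at common.sh@23, [[ -e /sys/devices/system/node/node/meminfo ]], fails in every pass of this trace because the node argument was left empty (local node=), so get_meminfo falls back to the system-wide /proc/meminfo. When a NUMA node number is supplied, the same check would point at that node's meminfo file, and the "${mem[@]#Node +([0-9]) }" expansion seen in the trace strips the leading "Node N " prefix those per-node files carry. A sketch of that source selection, inferred from the traced conditionals rather than copied from the script:

    # Sketch: pick the meminfo source the way the traced checks suggest ('node' is empty here).
    node=""                                   # get_meminfo was called with no node argument
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi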
00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
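Once this scan completes, the trace records resv=0 and then the hugepages.sh consistency checks (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )): with anon, surplus, and reserved pages all 0, every one of the 1024 preallocated pages must still be a plain free hugepage, which agrees with HugePages_Total: 1024 and HugePages_Free: 1024 in the snapshots. The same accounting, spelled out with the values taken from the trace:

    # The accounting the no_shrink_alloc test asserts (values from the trace above).
    anon=0 surp=0 resv=0 nr_hugepages=1024
    (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )) \
        && echo "all 1024 x 2048 kB pages accounted for (Hugetlb: 2097152 kB)"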
00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:09.540 nr_hugepages=1024 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:09.540 resv_hugepages=0 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:09.540 surplus_hugepages=0 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:09.540 anon_hugepages=0 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:09.540 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:09.800 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8166972 kB' 'MemAvailable: 9545928 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 437280 kB' 'Inactive: 1269356 kB' 'Active(anon): 116904 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 
'AnonPages: 117200 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 73352 kB' 'Slab: 144880 kB' 'SReclaimable: 73352 kB' 'SUnreclaim: 71528 kB' 'KernelStack: 4548 kB' 'PageTables: 3096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.801 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
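The field-by-field [[ ... ]] / continue pairs traced above are setup/common.sh's get_meminfo resolving a single key: /proc/meminfo (or a node's own meminfo file) is slurped with mapfile, any "Node N " prefix is stripped, and each "Field: value" line is split with IFS=': ' and skipped until the requested field (HugePages_Total here) matches, at which point its value is echoed (the echo 1024 / return 0 that follows). A minimal sketch of that lookup, using a hypothetical lookup_meminfo helper rather than the exact setup/common.sh implementation:

    #!/usr/bin/env bash
    shopt -s extglob                       # for the +([0-9]) pattern below

    # Hypothetical helper mirroring the lookup traced above; a sketch, not the
    # real get_meminfo from setup/common.sh.
    lookup_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                # numeric value only; the kB unit lands in _
                return 0
            fi
        done
        return 1
    }

On this host, lookup_meminfo HugePages_Total would print 1024 and lookup_meminfo HugePages_Surp 0 would print 0, matching the echo 1024 and echo 0 results in the trace.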
00:12:09.802 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:12:09.803 09:05:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8166468 kB' 'MemUsed: 4065776 kB' 'SwapCached: 0 kB' 'Active: 437280 kB' 'Inactive: 1269356 kB' 'Active(anon): 116904 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1589536 kB' 'Mapped: 48476 kB' 'AnonPages: 117460 kB' 'Shmem: 10464 kB' 'KernelStack: 4616 kB' 'PageTables: 3096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73352 kB' 'Slab: 144880 kB' 'SReclaimable: 73352 kB' 'SUnreclaim: 71528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 
09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 
09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.803 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:09.804 node0=1024 expecting 1024 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:09.804 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:10.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:10.063 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:10.063 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:10.063 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:10.063 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8163200 kB' 'MemAvailable: 9542160 kB' 'Buffers: 2436 kB' 'Cached: 1587104 kB' 'SwapCached: 0 kB' 'Active: 438472 kB' 'Inactive: 1269360 kB' 'Active(anon): 118096 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118340 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 73352 kB' 'Slab: 144856 kB' 'SReclaimable: 73352 kB' 'SUnreclaim: 71504 kB' 'KernelStack: 4744 kB' 'PageTables: 3040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
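The node0=1024 expecting 1024 line printed just before the setup.sh rerun comes from the per-node accounting in setup/hugepages.sh: the NUMA node directories are globbed with an extglob pattern, each node's count is adjusted by the reserved and surplus pages reported for that node, and the result is compared against nr_hugepages. A loose sketch of that shape (not the exact hugepages.sh arithmetic), reusing the hypothetical lookup_meminfo helper sketched earlier:

    shopt -s extglob
    nr_hugepages=1024 resv=0                                 # values echoed in the trace
    nodes_test=()                                            # indexed by NUMA node number
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_test[${node##*node}]=$nr_hugepages             # start from the global count
    done
    for node in "${!nodes_test[@]}"; do
        surp_node=$(lookup_meminfo HugePages_Surp "$node")   # 0 for node0 above
        (( nodes_test[node] += resv + surp_node ))
        echo "node$node=${nodes_test[node]} expecting $nr_hugepages"
    done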
00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 
09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
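The CLEAR_HUGE=no and NRHUGE=512 assignments and the INFO: Requested 512 hugepages but 1024 already allocated on node0 message above are the point of the no_shrink_alloc case: scripts/setup.sh is invoked again asking for fewer pages than are already reserved, and since the existing reservation is not cleared it stays at 1024, which this second verify pass then re-checks. Roughly, the rerun amounts to the following (the path is taken from the trace; how the two variables reach setup.sh, export versus inheritance, is not shown in the log):

    # Ask the SPDK setup script for half the pages without clearing the
    # existing reservation first (values taken from the trace above).
    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # setup.sh keeps the larger existing allocation, so verify_nr_hugepages
    # still finds HugePages_Total: 1024 afterwards.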
00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
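The earlier [[ always [madvise] never != *[never]* ]] test is the guard for the AnonHugePages scan now being traced: anonymous huge pages are only counted when transparent hugepages are not disabled outright, and the bracketed token in the kernel's THP enabled setting is the active mode. A small sketch of that guard, assuming the usual sysfs path for the setting and reusing the hypothetical lookup_meminfo helper:

    # Count AnonHugePages only when transparent hugepages are not set to "never".
    # Path assumed; the active THP mode is the bracketed token in this file.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(lookup_meminfo AnonHugePages)               # 0 on this host
    fi
    echo "anon_hugepages=$anon"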
00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8162948 kB' 'MemAvailable: 9541896 kB' 'Buffers: 2436 kB' 'Cached: 
1587100 kB' 'SwapCached: 0 kB' 'Active: 437596 kB' 'Inactive: 1269356 kB' 'Active(anon): 117220 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117492 kB' 'Mapped: 48276 kB' 'Shmem: 10464 kB' 'KReclaimable: 73336 kB' 'Slab: 144768 kB' 'SReclaimable: 73336 kB' 'SUnreclaim: 71432 kB' 'KernelStack: 4592 kB' 'PageTables: 2944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.064 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.065 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.326 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8162948 kB' 'MemAvailable: 9541896 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 437612 kB' 'Inactive: 1269356 kB' 'Active(anon): 117236 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
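The scan above is setup/common.sh's get_meminfo walking each 'Key: value' pair of /proc/meminfo until the requested key (here HugePages_Surp) matches, then echoing that key's value. The same parsing pattern, reduced to a standalone sketch for illustration only (the helper name get_meminfo_value and the fallback to 0 for a missing key are assumptions of this sketch, not the project's actual helper):

    #!/usr/bin/env bash
    # Illustrative sketch of the /proc/meminfo lookup seen in the trace:
    # split each line on ':' plus whitespace and print the value of the
    # requested key (0 if the key is absent; an assumption made here).
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done </proc/meminfo
        echo 0
    }

    # Example usage, mirroring the keys resolved in this test:
    for key in AnonHugePages HugePages_Surp HugePages_Rsvd HugePages_Total; do
        printf '%s=%s\n' "$key" "$(get_meminfo_value "$key")"
    done

Splitting with IFS=': ' is what lets a single read pull the key name into var and the number into val, which is exactly the var/val pair visible in the xtrace output.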
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8162948 kB' 'MemAvailable: 9541896 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 437612 kB' 'Inactive: 1269356 kB' 'Active(anon): 117236 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117508 kB' 'Mapped: 48276 kB' 'Shmem: 10464 kB' 'KReclaimable: 73336 kB' 'Slab: 144768 kB' 'SReclaimable: 73336 kB' 'SUnreclaim: 71432 kB' 'KernelStack: 4592 kB' 'PageTables: 2944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB'
00:12:10.327 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... per-key scan: MemTotal through HugePages_Free all fail to match HugePages_Rsvd and take the continue branch ...]
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:12:10.329 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
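With anon, surp and resv all resolved to 0 and nr_hugepages echoed as 1024, the arithmetic guards at hugepages.sh@107 and @109 pass before the script re-reads HugePages_Total. A hedged sketch of that accounting check done directly against the kernel counters (variable names mirror the trace; the literal 1024 stands in for whatever requested page count the real script expands there, which is an assumption of this sketch):

    #!/usr/bin/env bash
    # Sketch only: check that hugepage accounting is self-consistent after
    # an allocation request. Not the actual setup/hugepages.sh logic.
    set -euo pipefail

    requested=1024                                  # assumed requested page count
    nr_hugepages=$(< /proc/sys/vm/nr_hugepages)     # pages the kernel holds now
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

    (( requested == nr_hugepages + surp + resv ))   # nothing shrunk or borrowed
    (( requested == nr_hugepages ))                 # request satisfied exactly
    (( total == nr_hugepages ))                     # /proc/meminfo agrees
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"

Under set -e a failed (( )) guard makes the sketch exit non-zero, so any mismatch between the requested count and the kernel's counters is immediately visible.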
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:12:10.329 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8162948 kB' 'MemAvailable: 9541896 kB' 'Buffers: 2436 kB' 'Cached: 1587100 kB' 'SwapCached: 0 kB' 'Active: 437544 kB' 'Inactive: 1269356 kB' 'Active(anon): 117168 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117404 kB' 'Mapped: 48276 kB' 'Shmem: 10464 kB' 'KReclaimable: 73336 kB' 'Slab: 144760 kB' 'SReclaimable: 73336 kB' 'SUnreclaim: 71424 kB' 'KernelStack: 4576 kB' 'PageTables: 2904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 7180288 kB' 'DirectMap1G: 7340032 kB'
00:12:10.330 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... per-key scan: MemTotal through HardwareCorrupted compared against HugePages_Total, each taking the continue branch ...]
00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 
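For reference, the lookup being traced here can be sketched as a simplified standalone helper (hypothetical name get_meminfo_sketch, not the verbatim setup/common.sh source): read a meminfo-style file with IFS=': ', skip every field that is not the requested key, and print the value once the key matches.

    # Hypothetical simplified form of the traced lookup: scan a meminfo-style
    # file and print the value recorded for a single key such as HugePages_Total.
    get_meminfo_sketch() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching fields are skipped, as in the trace
            echo "$val"                        # e.g. 1024 for HugePages_Total in this run
            return 0
        done < "$mem_f"
        return 1
    }

With that shape in mind, the echo 1024 and the (( 1024 == nr_hugepages + surp + resv )) check that follow in the trace are just this lookup feeding the hugepages accounting.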
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8162948 kB' 'MemUsed: 4069296 kB' 'SwapCached: 0 kB' 'Active: 437268 kB' 'Inactive: 1269356 kB' 'Active(anon): 116892 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320376 kB' 'Inactive(file): 1258696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 1589536 kB' 'Mapped: 48276 kB' 'AnonPages: 117384 kB' 'Shmem: 10464 kB' 'KernelStack: 4576 kB' 'PageTables: 2904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73336 kB' 'Slab: 144760 kB' 'SReclaimable: 73336 kB' 'SUnreclaim: 71424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:10.331 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.331 09:05:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace condensed for readability: the same IFS=': ' read loop now walks the node0 meminfo fields just printed (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted); none of them matches HugePages_Surp, so every iteration again takes the continue at setup/common.sh@32 until the HugePages_* fields are reached below]
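The pass traced just above is the per-node variant of the same lookup: setup/common.sh switches mem_f to /sys/devices/system/node/node0/meminfo when that file exists and strips the leading "Node 0 " prefix before scanning for HugePages_Surp. Roughly as follows (the helper name and the sed-based prefix strip are illustrative substitutes for the mapfile and parameter-expansion form seen in the trace):

    # Illustrative per-node variant: prefer the node-local meminfo file and drop
    # the "Node <id> " prefix so the same key scan works unchanged.
    get_node_meminfo_sketch() {
        local get=$1 node=${2:-0} var val _
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed "s/^Node $node //" "$mem_f")
        return 1
    }
    # In this run: get_node_meminfo_sketch HugePages_Surp 0 -> 0, which is the
    # echo 0 the trace reaches below before concluding "node0=1024 expecting 1024".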
00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.332 node0=1024 expecting 1024 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:10.332 00:12:10.332 real 0m1.195s 00:12:10.332 user 0m0.549s 00:12:10.332 sys 0m0.702s 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:10.332 ************************************ 00:12:10.332 END TEST no_shrink_alloc 00:12:10.332 ************************************ 00:12:10.332 09:05:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:10.332 09:05:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:10.332 ************************************ 00:12:10.332 END TEST hugepages 00:12:10.332 ************************************ 00:12:10.332 00:12:10.332 real 0m5.260s 00:12:10.332 user 0m2.403s 00:12:10.332 
sys 0m2.935s 00:12:10.332 09:05:22 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:10.332 09:05:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:10.332 09:05:22 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:10.332 09:05:22 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:10.332 09:05:22 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:10.332 09:05:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:10.332 ************************************ 00:12:10.332 START TEST driver 00:12:10.332 ************************************ 00:12:10.332 09:05:22 setup.sh.driver -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:10.591 * Looking for test storage... 00:12:10.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:10.591 09:05:22 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:12:10.591 09:05:22 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:10.591 09:05:22 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:11.157 09:05:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:12:11.157 09:05:23 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:11.157 09:05:23 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:11.157 09:05:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:11.157 ************************************ 00:12:11.157 START TEST guess_driver 00:12:11.157 ************************************ 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod 
/lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:12:11.157 insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:12:11.157 Looking for driver=uio_pci_generic 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:12:11.157 09:05:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:11.725 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:12:11.726 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:12:11.726 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:11.983 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:11.984 09:05:24 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:13.008 00:12:13.008 real 0m1.658s 00:12:13.008 user 0m0.581s 00:12:13.008 sys 0m1.105s 00:12:13.009 09:05:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:13.009 09:05:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 ************************************ 00:12:13.009 END TEST guess_driver 00:12:13.009 ************************************ 00:12:13.009 ************************************ 00:12:13.009 END TEST driver 00:12:13.009 ************************************ 00:12:13.009 00:12:13.009 real 0m2.485s 00:12:13.009 user 0m0.847s 00:12:13.009 sys 0m1.745s 00:12:13.009 09:05:25 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:13.009 09:05:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 09:05:25 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:13.009 09:05:25 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 
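By this point the trace has also shown the whole driver-selection decision that guess_driver verifies: vfio is rejected because no populated /sys/kernel/iommu_groups entries are found and unsafe no-IOMMU mode is not enabled, so the test falls back to uio_pci_generic, which modprobe --show-depends resolves to .ko modules. A condensed sketch of that decision follows (the function name, the vfio-pci label and the nullglob handling are assumptions; only the uio_pci_generic branch is actually exercised in this log):

    # Condensed sketch of the traced driver choice: vfio when IOMMU groups exist
    # (or unsafe no-IOMMU mode is enabled), otherwise uio_pci_generic if modprobe
    # can resolve the module, otherwise report that no valid driver was found.
    pick_driver_sketch() {
        shopt -s nullglob                                  # empty glob counts as zero groups
        local groups=(/sys/kernel/iommu_groups/*) unsafe=""
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci                                  # assumed label; this branch is not taken above
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic                           # the branch the log actually takes
        else
            echo 'No valid driver found'
        fi
    }

The "Looking for driver=uio_pci_generic" marker lines in the trace are the test comparing this choice against the driver that setup.sh actually bound.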
00:12:13.009 09:05:25 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:13.009 09:05:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 ************************************ 00:12:13.009 START TEST devices 00:12:13.009 ************************************ 00:12:13.009 09:05:25 setup.sh.devices -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:13.009 * Looking for test storage... 00:12:13.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:13.009 09:05:25 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:12:13.009 09:05:25 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:12:13.009 09:05:25 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:13.009 09:05:25 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n2 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n3 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:13.944 09:05:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:12:13.944 09:05:26 
setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:12:13.944 No valid GPT data, bailing 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:12:13.944 09:05:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:13.944 09:05:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:13.944 09:05:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:12:13.944 No valid GPT data, bailing 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:12:13.944 09:05:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:12:13.944 09:05:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:12:13.944 09:05:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@205 -- 
# blocks+=("${block##*/}") 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:13.944 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:12:13.944 09:05:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:12:14.201 No valid GPT data, bailing 00:12:14.201 09:05:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:12:14.201 09:05:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:14.201 09:05:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:12:14.201 09:05:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:12:14.201 09:05:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:12:14.201 09:05:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:14.201 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:12:14.201 09:05:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:12:14.201 09:05:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:12:14.201 No valid GPT data, bailing 00:12:14.201 09:05:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:14.201 09:05:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:14.201 09:05:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:14.202 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:12:14.202 09:05:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:12:14.202 09:05:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:12:14.202 09:05:26 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:12:14.202 09:05:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:12:14.202 09:05:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:14.202 09:05:26 setup.sh.devices -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:10.0 00:12:14.202 09:05:26 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:12:14.202 09:05:26 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:12:14.202 09:05:26 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:12:14.202 09:05:26 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:14.202 09:05:26 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:14.202 09:05:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:14.202 ************************************ 00:12:14.202 START TEST nvme_mount 00:12:14.202 ************************************ 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:14.202 09:05:26 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:12:15.137 Creating new GPT entries in memory. 00:12:15.137 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:15.137 other utilities. 00:12:15.137 09:05:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:15.137 09:05:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:15.137 09:05:27 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:12:15.137 09:05:27 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:15.137 09:05:27 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:16.570 Creating new GPT entries in memory. 00:12:16.570 The operation has completed successfully. 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56259 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- 
setup/devices.sh@63 -- # found=1 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:16.570 09:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:16.827 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:16.827 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:16.827 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:16.827 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:16.827 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:16.827 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:16.827 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:16.828 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:16.828 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:16.828 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:12:16.828 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:16.828 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:16.828 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:16.828 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:17.086 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:17.086 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:17.086 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:17.344 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:17.344 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:17.344 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:17.344 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:17.344 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:12:17.344 09:05:29 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:12:17.344 09:05:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.344 09:05:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:12:17.344 09:05:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:12:17.344 09:05:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.344 09:05:29 
setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:17.344 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:17.344 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:17.345 09:05:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:17.603 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.603 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:12:17.603 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:17.603 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:17.603 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.603 09:05:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:17.603 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.603 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:17.862 09:05:30 
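The verify step above works by restricting scripts/setup.sh to a single controller and checking its status output; a minimal stand-alone version of that check might look like the following, where the grep pattern paraphrases the "Active devices" line visible in the trace rather than quoting the script:

  export PCI_ALLOWED=0000:00:11.0
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh config \
    | grep -q 'Active devices: .*nvme0n1:nvme0n1' \
    && echo 'mounted controller was left bound to the kernel, as expected'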
setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:17.862 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:17.863 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:17.863 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:17.863 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:17.863 09:05:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:17.863 09:05:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:18.121 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.121 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:12:18.121 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:18.121 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.121 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.121 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.379 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.379 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.379 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.379 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:18.637 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:18.637 00:12:18.637 real 
0m4.316s 00:12:18.637 user 0m0.787s 00:12:18.637 sys 0m1.266s 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:18.637 09:05:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:12:18.637 ************************************ 00:12:18.637 END TEST nvme_mount 00:12:18.637 ************************************ 00:12:18.637 09:05:30 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:12:18.637 09:05:30 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:18.637 09:05:30 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:18.637 09:05:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:18.637 ************************************ 00:12:18.637 START TEST dm_mount 00:12:18.637 ************************************ 00:12:18.637 09:05:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:12:18.637 09:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:12:18.637 09:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:12:18.637 09:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:12:18.637 09:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:12:18.637 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:18.638 09:05:30 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:12:19.575 Creating new GPT entries in memory. 00:12:19.575 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:19.575 other utilities. 00:12:19.575 09:05:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:19.575 09:05:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:19.575 09:05:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:12:19.575 09:05:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:19.575 09:05:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:20.950 Creating new GPT entries in memory. 00:12:20.950 The operation has completed successfully. 00:12:20.950 09:05:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:20.950 09:05:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:20.950 09:05:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:20.950 09:05:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:20.950 09:05:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:12:21.887 The operation has completed successfully. 00:12:21.887 09:05:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:21.887 09:05:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:21.887 09:05:33 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 56692 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:21.887 
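For the dm_mount case the trace only shows the dmsetup create call, not the table fed to it; the sketch below assumes a plain linear concatenation of the two 128 MiB partitions created above, so treat the table lines as an illustration rather than the script's exact input:

  sgdisk /dev/nvme0n1 --new=1:2048:264191     # partition 1: 262144 sectors
  sgdisk /dev/nvme0n1 --new=2:264192:526335   # partition 2: 262144 sectors
  printf '%s\n' \
    '0 262144 linear /dev/nvme0n1p1 0' \
    '262144 262144 linear /dev/nvme0n1p2 0' \
    | dmsetup create nvme_dm_test             # assumed linear table; see note above
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount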
09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:21.887 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:22.145 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.145 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:22.145 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.145 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:22.404 09:05:34 
setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:22.404 09:05:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:22.664 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.664 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:12:22.664 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:22.664 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:22.664 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.664 09:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:22.664 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.664 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
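The cleanup_dm path traced above amounts to the following, copied almost verbatim from the trace; it is shown only to make the tear-down order (unmount, remove the mapping, then wipe the partitions) easier to follow:

  mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
  mountpoint -q "$mnt" && umount "$mnt"
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
  [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2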
00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:12:22.923 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:12:22.923 00:12:22.923 real 0m4.391s 00:12:22.923 user 0m0.517s 00:12:22.923 sys 0m0.836s 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:22.923 09:05:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:12:22.923 ************************************ 00:12:22.923 END TEST dm_mount 00:12:22.923 ************************************ 00:12:22.923 09:05:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:12:22.923 09:05:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:12:22.923 09:05:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:22.923 09:05:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:22.923 09:05:35 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:23.181 09:05:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:23.181 09:05:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:23.440 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:23.440 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:23.440 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:23.440 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:23.440 09:05:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:12:23.440 09:05:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:23.440 09:05:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:23.440 09:05:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:23.440 09:05:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:23.440 09:05:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:12:23.440 09:05:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:12:23.440 00:12:23.440 real 0m10.435s 00:12:23.440 user 0m2.036s 00:12:23.440 sys 0m2.793s 00:12:23.440 09:05:35 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:23.440 09:05:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:23.440 ************************************ 00:12:23.440 END TEST devices 00:12:23.440 ************************************ 00:12:23.440 ************************************ 00:12:23.440 END TEST setup.sh 00:12:23.440 ************************************ 00:12:23.440 00:12:23.440 real 0m23.889s 00:12:23.440 user 0m7.625s 00:12:23.440 sys 0m10.816s 00:12:23.440 09:05:35 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:23.440 09:05:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:23.440 09:05:35 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:24.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:24.375 Hugepages 00:12:24.375 node 
hugesize free / total 00:12:24.375 node0 1048576kB 0 / 0 00:12:24.375 node0 2048kB 2048 / 2048 00:12:24.375 00:12:24.375 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:24.375 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:24.375 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:12:24.375 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:12:24.375 09:05:36 -- spdk/autotest.sh@130 -- # uname -s 00:12:24.375 09:05:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:12:24.375 09:05:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:12:24.375 09:05:36 -- common/autotest_common.sh@1528 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:25.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:25.312 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.571 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.571 09:05:37 -- common/autotest_common.sh@1529 -- # sleep 1 00:12:26.507 09:05:38 -- common/autotest_common.sh@1530 -- # bdfs=() 00:12:26.507 09:05:38 -- common/autotest_common.sh@1530 -- # local bdfs 00:12:26.507 09:05:38 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:12:26.507 09:05:38 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:12:26.507 09:05:38 -- common/autotest_common.sh@1510 -- # bdfs=() 00:12:26.507 09:05:38 -- common/autotest_common.sh@1510 -- # local bdfs 00:12:26.507 09:05:38 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:26.507 09:05:38 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:12:26.507 09:05:38 -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:26.507 09:05:38 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:12:26.507 09:05:38 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:12:26.507 09:05:38 -- common/autotest_common.sh@1533 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:27.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:27.073 Waiting for block devices as requested 00:12:27.073 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.333 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.333 09:05:39 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:12:27.333 09:05:39 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:12:27.333 09:05:39 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:12:27.333 09:05:39 -- common/autotest_common.sh@1499 -- # grep 0000:00:10.0/nvme/nvme 00:12:27.333 09:05:39 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:27.334 09:05:39 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:12:27.334 09:05:39 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:27.334 09:05:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme1 00:12:27.334 09:05:39 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme1 00:12:27.334 09:05:39 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme1 ]] 00:12:27.334 09:05:39 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme1 00:12:27.334 09:05:39 -- common/autotest_common.sh@1542 -- # grep oacs 
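The id-ctrl parsing that starts here checks whether the controller advertises namespace management in OACS; a compact version of that check, with the bitmask made explicit (the bit test is an assumption about what the helper does with the extracted value, which the trace shows as ' 0x12a'), is:

  oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)   # ' 0x12a' on this controller
  oacs_ns_manage=$(( oacs & 0x8 ))                            # OACS bit 3 = namespace management
  if (( oacs_ns_manage != 0 )); then
      echo 'namespace management supported'
  fi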
00:12:27.334 09:05:39 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:12:27.334 09:05:39 -- common/autotest_common.sh@1542 -- # oacs=' 0x12a' 00:12:27.334 09:05:39 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:12:27.334 09:05:39 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:12:27.334 09:05:39 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme1 00:12:27.334 09:05:39 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:12:27.334 09:05:39 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:12:27.334 09:05:39 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:12:27.334 09:05:39 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:12:27.334 09:05:39 -- common/autotest_common.sh@1554 -- # continue 00:12:27.334 09:05:39 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:12:27.334 09:05:39 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:12:27.334 09:05:39 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:12:27.334 09:05:39 -- common/autotest_common.sh@1499 -- # grep 0000:00:11.0/nvme/nvme 00:12:27.334 09:05:39 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:27.334 09:05:39 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:12:27.334 09:05:39 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:27.334 09:05:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:12:27.334 09:05:39 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:12:27.334 09:05:39 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:12:27.334 09:05:39 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:12:27.334 09:05:39 -- common/autotest_common.sh@1542 -- # grep oacs 00:12:27.334 09:05:39 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:12:27.334 09:05:39 -- common/autotest_common.sh@1542 -- # oacs=' 0x12a' 00:12:27.334 09:05:39 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:12:27.334 09:05:39 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:12:27.334 09:05:39 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:12:27.334 09:05:39 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:12:27.334 09:05:39 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:12:27.334 09:05:39 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:12:27.334 09:05:39 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:12:27.334 09:05:39 -- common/autotest_common.sh@1554 -- # continue 00:12:27.334 09:05:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:12:27.334 09:05:39 -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:27.334 09:05:39 -- common/autotest_common.sh@10 -- # set +x 00:12:27.334 09:05:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:12:27.334 09:05:39 -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:27.334 09:05:39 -- common/autotest_common.sh@10 -- # set +x 00:12:27.334 09:05:39 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:28.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:28.269 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:28.269 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:28.269 09:05:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:12:28.269 09:05:40 -- 
common/autotest_common.sh@727 -- # xtrace_disable 00:12:28.269 09:05:40 -- common/autotest_common.sh@10 -- # set +x 00:12:28.527 09:05:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:12:28.527 09:05:40 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:12:28.527 09:05:40 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:12:28.527 09:05:40 -- common/autotest_common.sh@1574 -- # bdfs=() 00:12:28.527 09:05:40 -- common/autotest_common.sh@1574 -- # local bdfs 00:12:28.527 09:05:40 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:12:28.527 09:05:40 -- common/autotest_common.sh@1510 -- # bdfs=() 00:12:28.527 09:05:40 -- common/autotest_common.sh@1510 -- # local bdfs 00:12:28.527 09:05:40 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:28.527 09:05:40 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:12:28.527 09:05:40 -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:28.527 09:05:40 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:12:28.527 09:05:40 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:12:28.527 09:05:40 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:12:28.527 09:05:40 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:12:28.527 09:05:40 -- common/autotest_common.sh@1577 -- # device=0x0010 00:12:28.527 09:05:40 -- common/autotest_common.sh@1578 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:28.527 09:05:40 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:12:28.527 09:05:40 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:12:28.527 09:05:40 -- common/autotest_common.sh@1577 -- # device=0x0010 00:12:28.527 09:05:40 -- common/autotest_common.sh@1578 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:28.527 09:05:40 -- common/autotest_common.sh@1583 -- # printf '%s\n' 00:12:28.527 09:05:40 -- common/autotest_common.sh@1589 -- # [[ -z '' ]] 00:12:28.527 09:05:40 -- common/autotest_common.sh@1590 -- # return 0 00:12:28.527 09:05:40 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:12:28.527 09:05:40 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:12:28.527 09:05:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:12:28.527 09:05:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:12:28.527 09:05:40 -- spdk/autotest.sh@162 -- # timing_enter lib 00:12:28.527 09:05:40 -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:28.527 09:05:40 -- common/autotest_common.sh@10 -- # set +x 00:12:28.527 09:05:40 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:28.527 09:05:40 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:28.527 09:05:40 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:28.527 09:05:40 -- common/autotest_common.sh@10 -- # set +x 00:12:28.527 ************************************ 00:12:28.527 START TEST env 00:12:28.527 ************************************ 00:12:28.527 09:05:40 env -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:28.527 * Looking for test storage... 
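The get_nvme_bdfs helper used throughout this stretch simply asks gen_nvme.sh for the generated NVMe config and pulls the PCI addresses out with jq; run on its own it looks like this:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 and 0000:00:11.0 on this VM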
00:12:28.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:12:28.527 09:05:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:28.527 09:05:40 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:28.527 09:05:40 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:28.527 09:05:40 env -- common/autotest_common.sh@10 -- # set +x 00:12:28.527 ************************************ 00:12:28.527 START TEST env_memory 00:12:28.527 ************************************ 00:12:28.527 09:05:40 env.env_memory -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:28.527 00:12:28.527 00:12:28.527 CUnit - A unit testing framework for C - Version 2.1-3 00:12:28.527 http://cunit.sourceforge.net/ 00:12:28.527 00:12:28.527 00:12:28.527 Suite: memory 00:12:28.786 Test: alloc and free memory map ...[2024-05-15 09:05:41.003374] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:12:28.786 passed 00:12:28.786 Test: mem map translation ...[2024-05-15 09:05:41.037024] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:12:28.786 [2024-05-15 09:05:41.037256] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:12:28.786 [2024-05-15 09:05:41.037518] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:12:28.786 [2024-05-15 09:05:41.037745] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:12:28.786 passed 00:12:28.786 Test: mem map registration ...[2024-05-15 09:05:41.101818] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:12:28.786 [2024-05-15 09:05:41.102076] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:12:28.786 passed 00:12:28.786 Test: mem map adjacent registrations ...passed 00:12:28.786 00:12:28.786 Run Summary: Type Total Ran Passed Failed Inactive 00:12:28.786 suites 1 1 n/a 0 0 00:12:28.786 tests 4 4 4 0 0 00:12:28.786 asserts 152 152 152 0 n/a 00:12:28.786 00:12:28.786 Elapsed time = 0.217 seconds 00:12:28.786 00:12:28.786 real 0m0.234s 00:12:28.786 user 0m0.213s 00:12:28.786 sys 0m0.015s 00:12:28.786 09:05:41 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:28.786 09:05:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:12:28.786 ************************************ 00:12:28.786 END TEST env_memory 00:12:28.786 ************************************ 00:12:29.045 09:05:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:29.045 09:05:41 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:29.045 09:05:41 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:29.045 09:05:41 env -- common/autotest_common.sh@10 -- # set +x 00:12:29.045 ************************************ 00:12:29.045 START TEST env_vtophys 00:12:29.045 ************************************ 00:12:29.045 09:05:41 
env.env_vtophys -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:29.045 EAL: lib.eal log level changed from notice to debug 00:12:29.045 EAL: Detected lcore 0 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 1 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 2 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 3 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 4 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 5 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 6 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 7 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 8 as core 0 on socket 0 00:12:29.045 EAL: Detected lcore 9 as core 0 on socket 0 00:12:29.045 EAL: Maximum logical cores by configuration: 128 00:12:29.045 EAL: Detected CPU lcores: 10 00:12:29.045 EAL: Detected NUMA nodes: 1 00:12:29.045 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:12:29.045 EAL: Detected shared linkage of DPDK 00:12:29.045 EAL: No shared files mode enabled, IPC will be disabled 00:12:29.045 EAL: Selected IOVA mode 'PA' 00:12:29.045 EAL: Probing VFIO support... 00:12:29.045 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:12:29.045 EAL: VFIO modules not loaded, skipping VFIO support... 00:12:29.045 EAL: Ask a virtual area of 0x2e000 bytes 00:12:29.045 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:12:29.045 EAL: Setting up physically contiguous memory... 00:12:29.045 EAL: Setting maximum number of open files to 524288 00:12:29.045 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:12:29.045 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:12:29.045 EAL: Ask a virtual area of 0x61000 bytes 00:12:29.045 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:12:29.045 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:29.045 EAL: Ask a virtual area of 0x400000000 bytes 00:12:29.045 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:12:29.046 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:12:29.046 EAL: Ask a virtual area of 0x61000 bytes 00:12:29.046 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:12:29.046 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:29.046 EAL: Ask a virtual area of 0x400000000 bytes 00:12:29.046 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:12:29.046 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:12:29.046 EAL: Ask a virtual area of 0x61000 bytes 00:12:29.046 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:12:29.046 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:29.046 EAL: Ask a virtual area of 0x400000000 bytes 00:12:29.046 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:12:29.046 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:12:29.046 EAL: Ask a virtual area of 0x61000 bytes 00:12:29.046 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:12:29.046 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:29.046 EAL: Ask a virtual area of 0x400000000 bytes 00:12:29.046 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:12:29.046 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:12:29.046 EAL: Hugepages will be freed exactly as allocated. 
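The EAL lines above note that no vfio module is present, which is one reason this run ends up in IOVA mode 'PA' over uio_pci_generic; a quick, out-of-band way to check for (and, if wanted, load) VFIO on a box like this would be the sketch below, which is not part of the test flow:

  if [[ ! -d /sys/module/vfio_pci ]]; then
      modprobe vfio-pci         # not done by this test; shown only as the usual remedy
  fi
  ls -d /sys/module/vfio /sys/module/vfio_pci 2>/dev/null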
00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: TSC frequency is ~2100000 KHz 00:12:29.046 EAL: Main lcore 0 is ready (tid=7f651328ca00;cpuset=[0]) 00:12:29.046 EAL: Trying to obtain current memory policy. 00:12:29.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.046 EAL: Restoring previous memory policy: 0 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was expanded by 2MB 00:12:29.046 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:12:29.046 EAL: No PCI address specified using 'addr=' in: bus=pci 00:12:29.046 EAL: Mem event callback 'spdk:(nil)' registered 00:12:29.046 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:12:29.046 00:12:29.046 00:12:29.046 CUnit - A unit testing framework for C - Version 2.1-3 00:12:29.046 http://cunit.sourceforge.net/ 00:12:29.046 00:12:29.046 00:12:29.046 Suite: components_suite 00:12:29.046 Test: vtophys_malloc_test ...passed 00:12:29.046 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:12:29.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.046 EAL: Restoring previous memory policy: 4 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was expanded by 4MB 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was shrunk by 4MB 00:12:29.046 EAL: Trying to obtain current memory policy. 00:12:29.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.046 EAL: Restoring previous memory policy: 4 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was expanded by 6MB 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was shrunk by 6MB 00:12:29.046 EAL: Trying to obtain current memory policy. 00:12:29.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.046 EAL: Restoring previous memory policy: 4 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was expanded by 10MB 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was shrunk by 10MB 00:12:29.046 EAL: Trying to obtain current memory policy. 
00:12:29.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.046 EAL: Restoring previous memory policy: 4 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was expanded by 18MB 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was shrunk by 18MB 00:12:29.046 EAL: Trying to obtain current memory policy. 00:12:29.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.046 EAL: Restoring previous memory policy: 4 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was expanded by 34MB 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was shrunk by 34MB 00:12:29.046 EAL: Trying to obtain current memory policy. 00:12:29.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.046 EAL: Restoring previous memory policy: 4 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.046 EAL: request: mp_malloc_sync 00:12:29.046 EAL: No shared files mode enabled, IPC is disabled 00:12:29.046 EAL: Heap on socket 0 was expanded by 66MB 00:12:29.046 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.305 EAL: request: mp_malloc_sync 00:12:29.305 EAL: No shared files mode enabled, IPC is disabled 00:12:29.305 EAL: Heap on socket 0 was shrunk by 66MB 00:12:29.305 EAL: Trying to obtain current memory policy. 00:12:29.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.305 EAL: Restoring previous memory policy: 4 00:12:29.305 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.305 EAL: request: mp_malloc_sync 00:12:29.305 EAL: No shared files mode enabled, IPC is disabled 00:12:29.305 EAL: Heap on socket 0 was expanded by 130MB 00:12:29.305 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.305 EAL: request: mp_malloc_sync 00:12:29.305 EAL: No shared files mode enabled, IPC is disabled 00:12:29.305 EAL: Heap on socket 0 was shrunk by 130MB 00:12:29.305 EAL: Trying to obtain current memory policy. 00:12:29.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.305 EAL: Restoring previous memory policy: 4 00:12:29.305 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.305 EAL: request: mp_malloc_sync 00:12:29.305 EAL: No shared files mode enabled, IPC is disabled 00:12:29.305 EAL: Heap on socket 0 was expanded by 258MB 00:12:29.305 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.305 EAL: request: mp_malloc_sync 00:12:29.305 EAL: No shared files mode enabled, IPC is disabled 00:12:29.305 EAL: Heap on socket 0 was shrunk by 258MB 00:12:29.305 EAL: Trying to obtain current memory policy. 
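Every "Heap on socket 0 was expanded by ..." step above is satisfied from the 2048 kB hugepage pool reserved earlier (the setup.sh status table showed node0 with 2048 pages); one way to confirm that pool from the shell is:

  grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages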
00:12:29.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.564 EAL: Restoring previous memory policy: 4 00:12:29.564 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.564 EAL: request: mp_malloc_sync 00:12:29.564 EAL: No shared files mode enabled, IPC is disabled 00:12:29.564 EAL: Heap on socket 0 was expanded by 514MB 00:12:29.564 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.564 EAL: request: mp_malloc_sync 00:12:29.564 EAL: No shared files mode enabled, IPC is disabled 00:12:29.564 EAL: Heap on socket 0 was shrunk by 514MB 00:12:29.564 EAL: Trying to obtain current memory policy. 00:12:29.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:29.823 EAL: Restoring previous memory policy: 4 00:12:29.823 EAL: Calling mem event callback 'spdk:(nil)' 00:12:29.823 EAL: request: mp_malloc_sync 00:12:29.823 EAL: No shared files mode enabled, IPC is disabled 00:12:29.823 EAL: Heap on socket 0 was expanded by 1026MB 00:12:30.083 EAL: Calling mem event callback 'spdk:(nil)' 00:12:30.083 EAL: request: mp_malloc_sync 00:12:30.083 EAL: No shared files mode enabled, IPC is disabled 00:12:30.083 passed 00:12:30.083 00:12:30.083 Run Summary: Type Total Ran Passed Failed Inactive 00:12:30.083 suites 1 1 n/a 0 0 00:12:30.083 tests 2 2 2 0 0 00:12:30.083 asserts 6457 6457 6457 0 n/a 00:12:30.083 00:12:30.083 Elapsed time = 1.009 seconds 00:12:30.083 EAL: Heap on socket 0 was shrunk by 1026MB 00:12:30.083 EAL: Calling mem event callback 'spdk:(nil)' 00:12:30.083 EAL: request: mp_malloc_sync 00:12:30.083 EAL: No shared files mode enabled, IPC is disabled 00:12:30.083 EAL: Heap on socket 0 was shrunk by 2MB 00:12:30.083 EAL: No shared files mode enabled, IPC is disabled 00:12:30.083 EAL: No shared files mode enabled, IPC is disabled 00:12:30.083 EAL: No shared files mode enabled, IPC is disabled 00:12:30.083 ************************************ 00:12:30.083 END TEST env_vtophys 00:12:30.083 ************************************ 00:12:30.083 00:12:30.083 real 0m1.228s 00:12:30.083 user 0m0.653s 00:12:30.083 sys 0m0.426s 00:12:30.083 09:05:42 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:30.083 09:05:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:12:30.342 09:05:42 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:30.342 09:05:42 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:30.342 09:05:42 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:30.342 09:05:42 env -- common/autotest_common.sh@10 -- # set +x 00:12:30.342 ************************************ 00:12:30.342 START TEST env_pci 00:12:30.342 ************************************ 00:12:30.342 09:05:42 env.env_pci -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:30.342 00:12:30.342 00:12:30.342 CUnit - A unit testing framework for C - Version 2.1-3 00:12:30.342 http://cunit.sourceforge.net/ 00:12:30.342 00:12:30.342 00:12:30.342 Suite: pci 00:12:30.342 Test: pci_hook ...[2024-05-15 09:05:42.564519] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57885 has claimed it 00:12:30.342 EAL: Cannot find device (10000:00:01.0) 00:12:30.342 EAL: Failed to attach device on primary process 00:12:30.342 passed 00:12:30.342 00:12:30.342 Run Summary: Type Total Ran Passed Failed Inactive 00:12:30.342 suites 1 1 n/a 0 0 00:12:30.342 tests 1 1 1 0 0 
00:12:30.342 asserts 25 25 25 0 n/a 00:12:30.342 00:12:30.342 Elapsed time = 0.003 seconds 00:12:30.342 00:12:30.342 real 0m0.028s 00:12:30.342 user 0m0.013s 00:12:30.342 sys 0m0.013s 00:12:30.342 09:05:42 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:30.342 09:05:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:12:30.342 ************************************ 00:12:30.342 END TEST env_pci 00:12:30.342 ************************************ 00:12:30.342 09:05:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:12:30.342 09:05:42 env -- env/env.sh@15 -- # uname 00:12:30.342 09:05:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:12:30.342 09:05:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:12:30.342 09:05:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:30.342 09:05:42 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:12:30.342 09:05:42 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:30.342 09:05:42 env -- common/autotest_common.sh@10 -- # set +x 00:12:30.342 ************************************ 00:12:30.342 START TEST env_dpdk_post_init 00:12:30.342 ************************************ 00:12:30.342 09:05:42 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:30.342 EAL: Detected CPU lcores: 10 00:12:30.342 EAL: Detected NUMA nodes: 1 00:12:30.342 EAL: Detected shared linkage of DPDK 00:12:30.342 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:30.342 EAL: Selected IOVA mode 'PA' 00:12:30.342 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:30.601 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:12:30.601 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:12:30.601 Starting DPDK initialization... 00:12:30.601 Starting SPDK post initialization... 00:12:30.601 SPDK NVMe probe 00:12:30.601 Attaching to 0000:00:10.0 00:12:30.601 Attaching to 0000:00:11.0 00:12:30.601 Attached to 0000:00:10.0 00:12:30.601 Attached to 0000:00:11.0 00:12:30.601 Cleaning up... 
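env_dpdk_post_init can only attach to 0000:00:10.0 and 0000:00:11.0 because setup.sh bound them to a userspace driver earlier in this log; if the attach ever fails, the current binding can be read straight from sysfs with a sketch like this:

  for bdf in 0000:00:10.0 0000:00:11.0; do
      drv=$(readlink "/sys/bus/pci/devices/$bdf/driver" 2>/dev/null)
      printf '%s -> %s\n' "$bdf" "${drv:+$(basename "$drv")}"
  done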
00:12:30.601 00:12:30.601 real 0m0.196s 00:12:30.601 user 0m0.044s 00:12:30.601 sys 0m0.050s 00:12:30.601 09:05:42 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:30.601 ************************************ 00:12:30.601 END TEST env_dpdk_post_init 00:12:30.601 ************************************ 00:12:30.601 09:05:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:12:30.601 09:05:42 env -- env/env.sh@26 -- # uname 00:12:30.601 09:05:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:12:30.601 09:05:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:12:30.601 09:05:42 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:30.601 09:05:42 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:30.601 09:05:42 env -- common/autotest_common.sh@10 -- # set +x 00:12:30.601 ************************************ 00:12:30.601 START TEST env_mem_callbacks 00:12:30.601 ************************************ 00:12:30.601 09:05:42 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:12:30.602 EAL: Detected CPU lcores: 10 00:12:30.602 EAL: Detected NUMA nodes: 1 00:12:30.602 EAL: Detected shared linkage of DPDK 00:12:30.602 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:30.602 EAL: Selected IOVA mode 'PA' 00:12:30.602 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:30.602 00:12:30.602 00:12:30.602 CUnit - A unit testing framework for C - Version 2.1-3 00:12:30.602 http://cunit.sourceforge.net/ 00:12:30.602 00:12:30.602 00:12:30.602 Suite: memory 00:12:30.602 Test: test ... 00:12:30.602 register 0x200000200000 2097152 00:12:30.602 malloc 3145728 00:12:30.602 register 0x200000400000 4194304 00:12:30.602 buf 0x200000500000 len 3145728 PASSED 00:12:30.602 malloc 64 00:12:30.602 buf 0x2000004fff40 len 64 PASSED 00:12:30.602 malloc 4194304 00:12:30.602 register 0x200000800000 6291456 00:12:30.602 buf 0x200000a00000 len 4194304 PASSED 00:12:30.602 free 0x200000500000 3145728 00:12:30.602 free 0x2000004fff40 64 00:12:30.602 unregister 0x200000400000 4194304 PASSED 00:12:30.602 free 0x200000a00000 4194304 00:12:30.602 unregister 0x200000800000 6291456 PASSED 00:12:30.602 malloc 8388608 00:12:30.602 register 0x200000400000 10485760 00:12:30.602 buf 0x200000600000 len 8388608 PASSED 00:12:30.602 free 0x200000600000 8388608 00:12:30.602 unregister 0x200000400000 10485760 PASSED 00:12:30.602 passed 00:12:30.602 00:12:30.602 Run Summary: Type Total Ran Passed Failed Inactive 00:12:30.602 suites 1 1 n/a 0 0 00:12:30.602 tests 1 1 1 0 0 00:12:30.602 asserts 15 15 15 0 n/a 00:12:30.602 00:12:30.602 Elapsed time = 0.008 seconds 00:12:30.602 00:12:30.602 real 0m0.159s 00:12:30.602 user 0m0.018s 00:12:30.602 sys 0m0.035s 00:12:30.602 09:05:43 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:30.602 09:05:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:12:30.602 ************************************ 00:12:30.602 END TEST env_mem_callbacks 00:12:30.602 ************************************ 00:12:30.859 ************************************ 00:12:30.859 END TEST env 00:12:30.859 ************************************ 00:12:30.859 00:12:30.859 real 0m2.238s 00:12:30.859 user 0m1.055s 00:12:30.859 sys 0m0.796s 00:12:30.859 09:05:43 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:30.859 09:05:43 env -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.859 09:05:43 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:12:30.859 09:05:43 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:30.859 09:05:43 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:30.859 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:12:30.859 ************************************ 00:12:30.859 START TEST rpc 00:12:30.859 ************************************ 00:12:30.859 09:05:43 rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:12:30.859 * Looking for test storage... 00:12:30.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:12:30.859 09:05:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58000 00:12:30.859 09:05:43 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:12:30.859 09:05:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:30.859 09:05:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58000 00:12:30.859 09:05:43 rpc -- common/autotest_common.sh@828 -- # '[' -z 58000 ']' 00:12:30.859 09:05:43 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.859 09:05:43 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:30.859 09:05:43 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.859 09:05:43 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:30.859 09:05:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.116 [2024-05-15 09:05:43.321560] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:12:31.116 [2024-05-15 09:05:43.321891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58000 ] 00:12:31.116 [2024-05-15 09:05:43.468796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.374 [2024-05-15 09:05:43.587309] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:12:31.374 [2024-05-15 09:05:43.587601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58000' to capture a snapshot of events at runtime. 00:12:31.374 [2024-05-15 09:05:43.587810] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.374 [2024-05-15 09:05:43.587886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.374 [2024-05-15 09:05:43.587928] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58000 for offline analysis/debug. 
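The rpc suite above starts build/bin/spdk_tgt with the bdev tracepoint group enabled (-e bdev) and waits for it to listen on /var/tmp/spdk.sock; each rpc_cmd call that follows is essentially a wrapper around the JSON-RPC client shipped as scripts/rpc.py. The rpc_integrity sequence that comes next (malloc bdev, passthru bdev on top, count the bdevs, tear down) can be driven directly with that client; a sketch, assuming the target from this log is still up on the default socket and that Malloc0 is the name the create call returns, as it does here:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # 8 MB malloc bdev with 512-byte blocks (16384 blocks, matching the JSON dump below),
  # then a passthru bdev that claims it.
  $RPC bdev_malloc_create 8 512                      # prints the new bdev name, e.g. Malloc0
  $RPC bdev_passthru_create -b Malloc0 -p Passthru0

  # The integrity check boils down to counting what bdev_get_bdevs reports.
  $RPC bdev_get_bdevs | jq length                    # expect 2

  # Tear down in reverse order and confirm the list is empty again.
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete Malloc0
  $RPC bdev_get_bdevs | jq length                    # expect 0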
00:12:31.374 [2024-05-15 09:05:43.588023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.941 09:05:44 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:31.941 09:05:44 rpc -- common/autotest_common.sh@861 -- # return 0 00:12:31.941 09:05:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:12:31.941 09:05:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:12:31.941 09:05:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:12:31.941 09:05:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:12:31.941 09:05:44 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:31.941 09:05:44 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:31.941 09:05:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 ************************************ 00:12:31.941 START TEST rpc_integrity 00:12:31.941 ************************************ 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:31.941 { 00:12:31.941 "name": "Malloc0", 00:12:31.941 "aliases": [ 00:12:31.941 "66032f43-89e8-477d-b420-9adf82a319f0" 00:12:31.941 ], 00:12:31.941 "product_name": "Malloc disk", 00:12:31.941 "block_size": 512, 00:12:31.941 "num_blocks": 16384, 00:12:31.941 "uuid": "66032f43-89e8-477d-b420-9adf82a319f0", 00:12:31.941 "assigned_rate_limits": { 00:12:31.941 "rw_ios_per_sec": 0, 00:12:31.941 "rw_mbytes_per_sec": 0, 00:12:31.941 "r_mbytes_per_sec": 0, 00:12:31.941 "w_mbytes_per_sec": 0 00:12:31.941 }, 00:12:31.941 "claimed": false, 00:12:31.941 "zoned": false, 00:12:31.941 "supported_io_types": { 00:12:31.941 "read": true, 00:12:31.941 "write": true, 00:12:31.941 "unmap": true, 00:12:31.941 "write_zeroes": 
true, 00:12:31.941 "flush": true, 00:12:31.941 "reset": true, 00:12:31.941 "compare": false, 00:12:31.941 "compare_and_write": false, 00:12:31.941 "abort": true, 00:12:31.941 "nvme_admin": false, 00:12:31.941 "nvme_io": false 00:12:31.941 }, 00:12:31.941 "memory_domains": [ 00:12:31.941 { 00:12:31.941 "dma_device_id": "system", 00:12:31.941 "dma_device_type": 1 00:12:31.941 }, 00:12:31.941 { 00:12:31.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.941 "dma_device_type": 2 00:12:31.941 } 00:12:31.941 ], 00:12:31.941 "driver_specific": {} 00:12:31.941 } 00:12:31.941 ]' 00:12:31.941 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:12:32.199 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 [2024-05-15 09:05:44.422043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:12:32.200 [2024-05-15 09:05:44.422241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.200 [2024-05-15 09:05:44.422297] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db1ab0 00:12:32.200 [2024-05-15 09:05:44.422406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.200 [2024-05-15 09:05:44.423988] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.200 [2024-05-15 09:05:44.424133] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:32.200 Passthru0 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:12:32.200 { 00:12:32.200 "name": "Malloc0", 00:12:32.200 "aliases": [ 00:12:32.200 "66032f43-89e8-477d-b420-9adf82a319f0" 00:12:32.200 ], 00:12:32.200 "product_name": "Malloc disk", 00:12:32.200 "block_size": 512, 00:12:32.200 "num_blocks": 16384, 00:12:32.200 "uuid": "66032f43-89e8-477d-b420-9adf82a319f0", 00:12:32.200 "assigned_rate_limits": { 00:12:32.200 "rw_ios_per_sec": 0, 00:12:32.200 "rw_mbytes_per_sec": 0, 00:12:32.200 "r_mbytes_per_sec": 0, 00:12:32.200 "w_mbytes_per_sec": 0 00:12:32.200 }, 00:12:32.200 "claimed": true, 00:12:32.200 "claim_type": "exclusive_write", 00:12:32.200 "zoned": false, 00:12:32.200 "supported_io_types": { 00:12:32.200 "read": true, 00:12:32.200 "write": true, 00:12:32.200 "unmap": true, 00:12:32.200 "write_zeroes": true, 00:12:32.200 "flush": true, 00:12:32.200 "reset": true, 00:12:32.200 "compare": false, 00:12:32.200 "compare_and_write": false, 00:12:32.200 "abort": true, 00:12:32.200 "nvme_admin": false, 00:12:32.200 "nvme_io": false 00:12:32.200 }, 00:12:32.200 "memory_domains": [ 00:12:32.200 { 00:12:32.200 "dma_device_id": "system", 00:12:32.200 "dma_device_type": 1 00:12:32.200 }, 00:12:32.200 { 00:12:32.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.200 "dma_device_type": 2 00:12:32.200 } 
00:12:32.200 ], 00:12:32.200 "driver_specific": {} 00:12:32.200 }, 00:12:32.200 { 00:12:32.200 "name": "Passthru0", 00:12:32.200 "aliases": [ 00:12:32.200 "61f6d20e-6ad1-5590-9dac-68cd0fa84b5d" 00:12:32.200 ], 00:12:32.200 "product_name": "passthru", 00:12:32.200 "block_size": 512, 00:12:32.200 "num_blocks": 16384, 00:12:32.200 "uuid": "61f6d20e-6ad1-5590-9dac-68cd0fa84b5d", 00:12:32.200 "assigned_rate_limits": { 00:12:32.200 "rw_ios_per_sec": 0, 00:12:32.200 "rw_mbytes_per_sec": 0, 00:12:32.200 "r_mbytes_per_sec": 0, 00:12:32.200 "w_mbytes_per_sec": 0 00:12:32.200 }, 00:12:32.200 "claimed": false, 00:12:32.200 "zoned": false, 00:12:32.200 "supported_io_types": { 00:12:32.200 "read": true, 00:12:32.200 "write": true, 00:12:32.200 "unmap": true, 00:12:32.200 "write_zeroes": true, 00:12:32.200 "flush": true, 00:12:32.200 "reset": true, 00:12:32.200 "compare": false, 00:12:32.200 "compare_and_write": false, 00:12:32.200 "abort": true, 00:12:32.200 "nvme_admin": false, 00:12:32.200 "nvme_io": false 00:12:32.200 }, 00:12:32.200 "memory_domains": [ 00:12:32.200 { 00:12:32.200 "dma_device_id": "system", 00:12:32.200 "dma_device_type": 1 00:12:32.200 }, 00:12:32.200 { 00:12:32.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.200 "dma_device_type": 2 00:12:32.200 } 00:12:32.200 ], 00:12:32.200 "driver_specific": { 00:12:32.200 "passthru": { 00:12:32.200 "name": "Passthru0", 00:12:32.200 "base_bdev_name": "Malloc0" 00:12:32.200 } 00:12:32.200 } 00:12:32.200 } 00:12:32.200 ]' 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:12:32.200 09:05:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:32.200 00:12:32.200 real 0m0.325s 00:12:32.200 user 0m0.186s 00:12:32.200 sys 0m0.056s 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:32.200 09:05:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 ************************************ 00:12:32.200 END TEST rpc_integrity 00:12:32.200 ************************************ 00:12:32.200 09:05:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:12:32.200 09:05:44 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:32.200 09:05:44 rpc -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:12:32.200 09:05:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 ************************************ 00:12:32.200 START TEST rpc_plugins 00:12:32.200 ************************************ 00:12:32.200 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:12:32.200 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:12:32.200 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.200 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.458 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:12:32.458 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:12:32.458 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.458 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.458 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:12:32.458 { 00:12:32.458 "name": "Malloc1", 00:12:32.458 "aliases": [ 00:12:32.458 "1d750d52-a2cd-495f-a516-49f83991a939" 00:12:32.458 ], 00:12:32.458 "product_name": "Malloc disk", 00:12:32.458 "block_size": 4096, 00:12:32.458 "num_blocks": 256, 00:12:32.458 "uuid": "1d750d52-a2cd-495f-a516-49f83991a939", 00:12:32.458 "assigned_rate_limits": { 00:12:32.458 "rw_ios_per_sec": 0, 00:12:32.458 "rw_mbytes_per_sec": 0, 00:12:32.458 "r_mbytes_per_sec": 0, 00:12:32.458 "w_mbytes_per_sec": 0 00:12:32.458 }, 00:12:32.458 "claimed": false, 00:12:32.458 "zoned": false, 00:12:32.458 "supported_io_types": { 00:12:32.458 "read": true, 00:12:32.458 "write": true, 00:12:32.458 "unmap": true, 00:12:32.458 "write_zeroes": true, 00:12:32.458 "flush": true, 00:12:32.458 "reset": true, 00:12:32.458 "compare": false, 00:12:32.458 "compare_and_write": false, 00:12:32.458 "abort": true, 00:12:32.458 "nvme_admin": false, 00:12:32.458 "nvme_io": false 00:12:32.458 }, 00:12:32.458 "memory_domains": [ 00:12:32.458 { 00:12:32.458 "dma_device_id": "system", 00:12:32.458 "dma_device_type": 1 00:12:32.458 }, 00:12:32.458 { 00:12:32.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.458 "dma_device_type": 2 00:12:32.458 } 00:12:32.458 ], 00:12:32.458 "driver_specific": {} 00:12:32.458 } 00:12:32.458 ]' 00:12:32.459 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:12:32.459 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:12:32.459 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:12:32.459 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.459 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:32.459 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.459 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:12:32.459 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.459 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:32.459 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.459 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:12:32.459 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:12:32.459 09:05:44 rpc.rpc_plugins -- rpc/rpc.sh@36 
-- # '[' 0 == 0 ']' 00:12:32.459 00:12:32.459 real 0m0.145s 00:12:32.459 user 0m0.081s 00:12:32.459 sys 0m0.023s 00:12:32.459 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:32.459 ************************************ 00:12:32.459 09:05:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:32.459 END TEST rpc_plugins 00:12:32.459 ************************************ 00:12:32.459 09:05:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:12:32.459 09:05:44 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:32.459 09:05:44 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:32.459 09:05:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.459 ************************************ 00:12:32.459 START TEST rpc_trace_cmd_test 00:12:32.459 ************************************ 00:12:32.459 09:05:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:12:32.459 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:12:32.459 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:12:32.459 09:05:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.459 09:05:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.459 09:05:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.459 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:12:32.459 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58000", 00:12:32.459 "tpoint_group_mask": "0x8", 00:12:32.459 "iscsi_conn": { 00:12:32.459 "mask": "0x2", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "scsi": { 00:12:32.459 "mask": "0x4", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "bdev": { 00:12:32.459 "mask": "0x8", 00:12:32.459 "tpoint_mask": "0xffffffffffffffff" 00:12:32.459 }, 00:12:32.459 "nvmf_rdma": { 00:12:32.459 "mask": "0x10", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "nvmf_tcp": { 00:12:32.459 "mask": "0x20", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "ftl": { 00:12:32.459 "mask": "0x40", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "blobfs": { 00:12:32.459 "mask": "0x80", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "dsa": { 00:12:32.459 "mask": "0x200", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "thread": { 00:12:32.459 "mask": "0x400", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "nvme_pcie": { 00:12:32.459 "mask": "0x800", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "iaa": { 00:12:32.459 "mask": "0x1000", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "nvme_tcp": { 00:12:32.459 "mask": "0x2000", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "bdev_nvme": { 00:12:32.459 "mask": "0x4000", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 }, 00:12:32.459 "sock": { 00:12:32.459 "mask": "0x8000", 00:12:32.459 "tpoint_mask": "0x0" 00:12:32.459 } 00:12:32.459 }' 00:12:32.459 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:12:32.756 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:12:32.756 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:12:32.756 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:12:32.756 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:12:32.756 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:12:32.756 09:05:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:12:32.756 09:05:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:12:32.756 09:05:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:12:32.756 09:05:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:12:32.756 00:12:32.756 real 0m0.238s 00:12:32.756 user 0m0.188s 00:12:32.756 sys 0m0.039s 00:12:32.756 09:05:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:32.756 ************************************ 00:12:32.756 END TEST rpc_trace_cmd_test 00:12:32.756 ************************************ 00:12:32.756 09:05:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.756 09:05:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:12:32.756 09:05:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:12:32.756 09:05:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:12:32.756 09:05:45 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:32.756 09:05:45 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:32.756 09:05:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.756 ************************************ 00:12:32.756 START TEST rpc_daemon_integrity 00:12:32.756 ************************************ 00:12:32.756 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:12:32.756 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.756 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.756 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:32.756 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.756 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:32.756 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:12:33.018 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:33.019 { 00:12:33.019 "name": "Malloc2", 00:12:33.019 "aliases": [ 00:12:33.019 "d9d24562-d245-48b8-bafe-4c2b4dd66404" 00:12:33.019 ], 00:12:33.019 "product_name": "Malloc disk", 00:12:33.019 "block_size": 512, 00:12:33.019 "num_blocks": 16384, 00:12:33.019 "uuid": "d9d24562-d245-48b8-bafe-4c2b4dd66404", 00:12:33.019 "assigned_rate_limits": { 00:12:33.019 "rw_ios_per_sec": 0, 00:12:33.019 
"rw_mbytes_per_sec": 0, 00:12:33.019 "r_mbytes_per_sec": 0, 00:12:33.019 "w_mbytes_per_sec": 0 00:12:33.019 }, 00:12:33.019 "claimed": false, 00:12:33.019 "zoned": false, 00:12:33.019 "supported_io_types": { 00:12:33.019 "read": true, 00:12:33.019 "write": true, 00:12:33.019 "unmap": true, 00:12:33.019 "write_zeroes": true, 00:12:33.019 "flush": true, 00:12:33.019 "reset": true, 00:12:33.019 "compare": false, 00:12:33.019 "compare_and_write": false, 00:12:33.019 "abort": true, 00:12:33.019 "nvme_admin": false, 00:12:33.019 "nvme_io": false 00:12:33.019 }, 00:12:33.019 "memory_domains": [ 00:12:33.019 { 00:12:33.019 "dma_device_id": "system", 00:12:33.019 "dma_device_type": 1 00:12:33.019 }, 00:12:33.019 { 00:12:33.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.019 "dma_device_type": 2 00:12:33.019 } 00:12:33.019 ], 00:12:33.019 "driver_specific": {} 00:12:33.019 } 00:12:33.019 ]' 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:33.019 [2024-05-15 09:05:45.286327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:12:33.019 [2024-05-15 09:05:45.286378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.019 [2024-05-15 09:05:45.286398] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e11590 00:12:33.019 [2024-05-15 09:05:45.286408] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.019 [2024-05-15 09:05:45.287800] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.019 [2024-05-15 09:05:45.287836] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:33.019 Passthru0 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:12:33.019 { 00:12:33.019 "name": "Malloc2", 00:12:33.019 "aliases": [ 00:12:33.019 "d9d24562-d245-48b8-bafe-4c2b4dd66404" 00:12:33.019 ], 00:12:33.019 "product_name": "Malloc disk", 00:12:33.019 "block_size": 512, 00:12:33.019 "num_blocks": 16384, 00:12:33.019 "uuid": "d9d24562-d245-48b8-bafe-4c2b4dd66404", 00:12:33.019 "assigned_rate_limits": { 00:12:33.019 "rw_ios_per_sec": 0, 00:12:33.019 "rw_mbytes_per_sec": 0, 00:12:33.019 "r_mbytes_per_sec": 0, 00:12:33.019 "w_mbytes_per_sec": 0 00:12:33.019 }, 00:12:33.019 "claimed": true, 00:12:33.019 "claim_type": "exclusive_write", 00:12:33.019 "zoned": false, 00:12:33.019 "supported_io_types": { 00:12:33.019 "read": true, 00:12:33.019 "write": true, 00:12:33.019 "unmap": true, 00:12:33.019 "write_zeroes": true, 00:12:33.019 "flush": true, 00:12:33.019 "reset": true, 00:12:33.019 "compare": false, 00:12:33.019 
"compare_and_write": false, 00:12:33.019 "abort": true, 00:12:33.019 "nvme_admin": false, 00:12:33.019 "nvme_io": false 00:12:33.019 }, 00:12:33.019 "memory_domains": [ 00:12:33.019 { 00:12:33.019 "dma_device_id": "system", 00:12:33.019 "dma_device_type": 1 00:12:33.019 }, 00:12:33.019 { 00:12:33.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.019 "dma_device_type": 2 00:12:33.019 } 00:12:33.019 ], 00:12:33.019 "driver_specific": {} 00:12:33.019 }, 00:12:33.019 { 00:12:33.019 "name": "Passthru0", 00:12:33.019 "aliases": [ 00:12:33.019 "be59a5ea-7cb8-5c93-ac75-da320f56ce16" 00:12:33.019 ], 00:12:33.019 "product_name": "passthru", 00:12:33.019 "block_size": 512, 00:12:33.019 "num_blocks": 16384, 00:12:33.019 "uuid": "be59a5ea-7cb8-5c93-ac75-da320f56ce16", 00:12:33.019 "assigned_rate_limits": { 00:12:33.019 "rw_ios_per_sec": 0, 00:12:33.019 "rw_mbytes_per_sec": 0, 00:12:33.019 "r_mbytes_per_sec": 0, 00:12:33.019 "w_mbytes_per_sec": 0 00:12:33.019 }, 00:12:33.019 "claimed": false, 00:12:33.019 "zoned": false, 00:12:33.019 "supported_io_types": { 00:12:33.019 "read": true, 00:12:33.019 "write": true, 00:12:33.019 "unmap": true, 00:12:33.019 "write_zeroes": true, 00:12:33.019 "flush": true, 00:12:33.019 "reset": true, 00:12:33.019 "compare": false, 00:12:33.019 "compare_and_write": false, 00:12:33.019 "abort": true, 00:12:33.019 "nvme_admin": false, 00:12:33.019 "nvme_io": false 00:12:33.019 }, 00:12:33.019 "memory_domains": [ 00:12:33.019 { 00:12:33.019 "dma_device_id": "system", 00:12:33.019 "dma_device_type": 1 00:12:33.019 }, 00:12:33.019 { 00:12:33.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.019 "dma_device_type": 2 00:12:33.019 } 00:12:33.019 ], 00:12:33.019 "driver_specific": { 00:12:33.019 "passthru": { 00:12:33.019 "name": "Passthru0", 00:12:33.019 "base_bdev_name": "Malloc2" 00:12:33.019 } 00:12:33.019 } 00:12:33.019 } 00:12:33.019 ]' 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:33.019 00:12:33.019 real 0m0.319s 00:12:33.019 user 0m0.191s 00:12:33.019 sys 
0m0.060s 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:33.019 09:05:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:33.019 ************************************ 00:12:33.019 END TEST rpc_daemon_integrity 00:12:33.019 ************************************ 00:12:33.287 09:05:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:12:33.287 09:05:45 rpc -- rpc/rpc.sh@84 -- # killprocess 58000 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@947 -- # '[' -z 58000 ']' 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@951 -- # kill -0 58000 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@952 -- # uname 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 58000 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:33.287 killing process with pid 58000 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 58000' 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@966 -- # kill 58000 00:12:33.287 09:05:45 rpc -- common/autotest_common.sh@971 -- # wait 58000 00:12:33.546 ************************************ 00:12:33.546 END TEST rpc 00:12:33.546 ************************************ 00:12:33.546 00:12:33.546 real 0m2.760s 00:12:33.546 user 0m3.466s 00:12:33.546 sys 0m0.739s 00:12:33.546 09:05:45 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:33.546 09:05:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.546 09:05:45 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:12:33.546 09:05:45 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:33.546 09:05:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:33.546 09:05:45 -- common/autotest_common.sh@10 -- # set +x 00:12:33.546 ************************************ 00:12:33.546 START TEST skip_rpc 00:12:33.546 ************************************ 00:12:33.546 09:05:45 skip_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:12:33.805 * Looking for test storage... 
00:12:33.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:12:33.805 09:05:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:33.805 09:05:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:33.805 09:05:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:12:33.805 09:05:46 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:33.805 09:05:46 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:33.805 09:05:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.805 ************************************ 00:12:33.805 START TEST skip_rpc 00:12:33.805 ************************************ 00:12:33.805 09:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:12:33.805 09:05:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58198 00:12:33.805 09:05:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:12:33.805 09:05:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:33.805 09:05:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:12:33.805 [2024-05-15 09:05:46.137198] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:12:33.805 [2024-05-15 09:05:46.137285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58198 ] 00:12:34.064 [2024-05-15 09:05:46.271748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.064 [2024-05-15 09:05:46.372481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58198 00:12:39.367 09:05:51 
skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 58198 ']' 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 58198 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 58198 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:39.367 killing process with pid 58198 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 58198' 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 58198 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 58198 00:12:39.367 00:12:39.367 real 0m5.412s 00:12:39.367 user 0m5.075s 00:12:39.367 sys 0m0.235s 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:39.367 09:05:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.367 ************************************ 00:12:39.367 END TEST skip_rpc 00:12:39.367 ************************************ 00:12:39.367 09:05:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:12:39.367 09:05:51 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:39.367 09:05:51 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:39.367 09:05:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.367 ************************************ 00:12:39.367 START TEST skip_rpc_with_json 00:12:39.367 ************************************ 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58279 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58279 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 58279 ']' 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:39.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:39.367 09:05:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:39.367 [2024-05-15 09:05:51.624033] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
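The skip_rpc case that just finished (pid 58198) is a pure negative test: with --no-rpc-server the target never creates /var/tmp/spdk.sock, so spdk_get_version has nothing to talk to and must fail, and the NOT wrapper flips that failure into a pass. A stand-alone sketch of the same assertion, assuming the usual build layout; the 5-second sleep mirrors the test's own crude wait, since waitforlisten cannot be used when there is no socket to poll:

  SPDK=/home/vagrant/spdk_repo/spdk

  # Start the target without an RPC server on core 0; there is no socket to wait on.
  $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5

  # With no RPC server the client cannot connect, so a non-zero exit is the expected result.
  if $SPDK/scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server answered" >&2
  else
      echo "OK: spdk_get_version failed as expected"
  fi

  kill "$tgt_pid"
  wait "$tgt_pid"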
00:12:39.367 [2024-05-15 09:05:51.624187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58279 ] 00:12:39.367 [2024-05-15 09:05:51.763614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.625 [2024-05-15 09:05:51.868306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.191 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:40.191 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:40.192 [2024-05-15 09:05:52.541987] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:12:40.192 request: 00:12:40.192 { 00:12:40.192 "trtype": "tcp", 00:12:40.192 "method": "nvmf_get_transports", 00:12:40.192 "req_id": 1 00:12:40.192 } 00:12:40.192 Got JSON-RPC error response 00:12:40.192 response: 00:12:40.192 { 00:12:40.192 "code": -19, 00:12:40.192 "message": "No such device" 00:12:40.192 } 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:40.192 [2024-05-15 09:05:52.558084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.192 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:40.450 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.450 09:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:40.450 { 00:12:40.450 "subsystems": [ 00:12:40.450 { 00:12:40.450 "subsystem": "keyring", 00:12:40.450 "config": [] 00:12:40.450 }, 00:12:40.450 { 00:12:40.450 "subsystem": "iobuf", 00:12:40.450 "config": [ 00:12:40.450 { 00:12:40.450 "method": "iobuf_set_options", 00:12:40.450 "params": { 00:12:40.450 "small_pool_count": 8192, 00:12:40.450 "large_pool_count": 1024, 00:12:40.450 "small_bufsize": 8192, 00:12:40.450 "large_bufsize": 135168 00:12:40.450 } 00:12:40.450 } 00:12:40.450 ] 00:12:40.450 }, 00:12:40.450 { 00:12:40.450 "subsystem": "sock", 00:12:40.450 "config": [ 00:12:40.450 { 00:12:40.450 "method": "sock_impl_set_options", 00:12:40.450 "params": { 00:12:40.450 "impl_name": "uring", 00:12:40.450 "recv_buf_size": 2097152, 00:12:40.451 "send_buf_size": 2097152, 00:12:40.451 "enable_recv_pipe": true, 00:12:40.451 "enable_quickack": false, 00:12:40.451 "enable_placement_id": 0, 00:12:40.451 "enable_zerocopy_send_server": false, 
00:12:40.451 "enable_zerocopy_send_client": false, 00:12:40.451 "zerocopy_threshold": 0, 00:12:40.451 "tls_version": 0, 00:12:40.451 "enable_ktls": false 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "sock_impl_set_options", 00:12:40.451 "params": { 00:12:40.451 "impl_name": "posix", 00:12:40.451 "recv_buf_size": 2097152, 00:12:40.451 "send_buf_size": 2097152, 00:12:40.451 "enable_recv_pipe": true, 00:12:40.451 "enable_quickack": false, 00:12:40.451 "enable_placement_id": 0, 00:12:40.451 "enable_zerocopy_send_server": true, 00:12:40.451 "enable_zerocopy_send_client": false, 00:12:40.451 "zerocopy_threshold": 0, 00:12:40.451 "tls_version": 0, 00:12:40.451 "enable_ktls": false 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "sock_impl_set_options", 00:12:40.451 "params": { 00:12:40.451 "impl_name": "ssl", 00:12:40.451 "recv_buf_size": 4096, 00:12:40.451 "send_buf_size": 4096, 00:12:40.451 "enable_recv_pipe": true, 00:12:40.451 "enable_quickack": false, 00:12:40.451 "enable_placement_id": 0, 00:12:40.451 "enable_zerocopy_send_server": true, 00:12:40.451 "enable_zerocopy_send_client": false, 00:12:40.451 "zerocopy_threshold": 0, 00:12:40.451 "tls_version": 0, 00:12:40.451 "enable_ktls": false 00:12:40.451 } 00:12:40.451 } 00:12:40.451 ] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "vmd", 00:12:40.451 "config": [] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "accel", 00:12:40.451 "config": [ 00:12:40.451 { 00:12:40.451 "method": "accel_set_options", 00:12:40.451 "params": { 00:12:40.451 "small_cache_size": 128, 00:12:40.451 "large_cache_size": 16, 00:12:40.451 "task_count": 2048, 00:12:40.451 "sequence_count": 2048, 00:12:40.451 "buf_count": 2048 00:12:40.451 } 00:12:40.451 } 00:12:40.451 ] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "bdev", 00:12:40.451 "config": [ 00:12:40.451 { 00:12:40.451 "method": "bdev_set_options", 00:12:40.451 "params": { 00:12:40.451 "bdev_io_pool_size": 65535, 00:12:40.451 "bdev_io_cache_size": 256, 00:12:40.451 "bdev_auto_examine": true, 00:12:40.451 "iobuf_small_cache_size": 128, 00:12:40.451 "iobuf_large_cache_size": 16 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "bdev_raid_set_options", 00:12:40.451 "params": { 00:12:40.451 "process_window_size_kb": 1024 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "bdev_iscsi_set_options", 00:12:40.451 "params": { 00:12:40.451 "timeout_sec": 30 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "bdev_nvme_set_options", 00:12:40.451 "params": { 00:12:40.451 "action_on_timeout": "none", 00:12:40.451 "timeout_us": 0, 00:12:40.451 "timeout_admin_us": 0, 00:12:40.451 "keep_alive_timeout_ms": 10000, 00:12:40.451 "arbitration_burst": 0, 00:12:40.451 "low_priority_weight": 0, 00:12:40.451 "medium_priority_weight": 0, 00:12:40.451 "high_priority_weight": 0, 00:12:40.451 "nvme_adminq_poll_period_us": 10000, 00:12:40.451 "nvme_ioq_poll_period_us": 0, 00:12:40.451 "io_queue_requests": 0, 00:12:40.451 "delay_cmd_submit": true, 00:12:40.451 "transport_retry_count": 4, 00:12:40.451 "bdev_retry_count": 3, 00:12:40.451 "transport_ack_timeout": 0, 00:12:40.451 "ctrlr_loss_timeout_sec": 0, 00:12:40.451 "reconnect_delay_sec": 0, 00:12:40.451 "fast_io_fail_timeout_sec": 0, 00:12:40.451 "disable_auto_failback": false, 00:12:40.451 "generate_uuids": false, 00:12:40.451 "transport_tos": 0, 00:12:40.451 "nvme_error_stat": false, 00:12:40.451 "rdma_srq_size": 0, 00:12:40.451 "io_path_stat": false, 
00:12:40.451 "allow_accel_sequence": false, 00:12:40.451 "rdma_max_cq_size": 0, 00:12:40.451 "rdma_cm_event_timeout_ms": 0, 00:12:40.451 "dhchap_digests": [ 00:12:40.451 "sha256", 00:12:40.451 "sha384", 00:12:40.451 "sha512" 00:12:40.451 ], 00:12:40.451 "dhchap_dhgroups": [ 00:12:40.451 "null", 00:12:40.451 "ffdhe2048", 00:12:40.451 "ffdhe3072", 00:12:40.451 "ffdhe4096", 00:12:40.451 "ffdhe6144", 00:12:40.451 "ffdhe8192" 00:12:40.451 ] 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "bdev_nvme_set_hotplug", 00:12:40.451 "params": { 00:12:40.451 "period_us": 100000, 00:12:40.451 "enable": false 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "bdev_wait_for_examine" 00:12:40.451 } 00:12:40.451 ] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "scsi", 00:12:40.451 "config": null 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "scheduler", 00:12:40.451 "config": [ 00:12:40.451 { 00:12:40.451 "method": "framework_set_scheduler", 00:12:40.451 "params": { 00:12:40.451 "name": "static" 00:12:40.451 } 00:12:40.451 } 00:12:40.451 ] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "vhost_scsi", 00:12:40.451 "config": [] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "vhost_blk", 00:12:40.451 "config": [] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "ublk", 00:12:40.451 "config": [] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "nbd", 00:12:40.451 "config": [] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "nvmf", 00:12:40.451 "config": [ 00:12:40.451 { 00:12:40.451 "method": "nvmf_set_config", 00:12:40.451 "params": { 00:12:40.451 "discovery_filter": "match_any", 00:12:40.451 "admin_cmd_passthru": { 00:12:40.451 "identify_ctrlr": false 00:12:40.451 } 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "nvmf_set_max_subsystems", 00:12:40.451 "params": { 00:12:40.451 "max_subsystems": 1024 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "nvmf_set_crdt", 00:12:40.451 "params": { 00:12:40.451 "crdt1": 0, 00:12:40.451 "crdt2": 0, 00:12:40.451 "crdt3": 0 00:12:40.451 } 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "method": "nvmf_create_transport", 00:12:40.451 "params": { 00:12:40.451 "trtype": "TCP", 00:12:40.451 "max_queue_depth": 128, 00:12:40.451 "max_io_qpairs_per_ctrlr": 127, 00:12:40.451 "in_capsule_data_size": 4096, 00:12:40.451 "max_io_size": 131072, 00:12:40.451 "io_unit_size": 131072, 00:12:40.451 "max_aq_depth": 128, 00:12:40.451 "num_shared_buffers": 511, 00:12:40.451 "buf_cache_size": 4294967295, 00:12:40.451 "dif_insert_or_strip": false, 00:12:40.451 "zcopy": false, 00:12:40.451 "c2h_success": true, 00:12:40.451 "sock_priority": 0, 00:12:40.451 "abort_timeout_sec": 1, 00:12:40.451 "ack_timeout": 0, 00:12:40.451 "data_wr_pool_size": 0 00:12:40.451 } 00:12:40.451 } 00:12:40.451 ] 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "subsystem": "iscsi", 00:12:40.451 "config": [ 00:12:40.451 { 00:12:40.451 "method": "iscsi_set_options", 00:12:40.451 "params": { 00:12:40.451 "node_base": "iqn.2016-06.io.spdk", 00:12:40.451 "max_sessions": 128, 00:12:40.451 "max_connections_per_session": 2, 00:12:40.451 "max_queue_depth": 64, 00:12:40.451 "default_time2wait": 2, 00:12:40.451 "default_time2retain": 20, 00:12:40.451 "first_burst_length": 8192, 00:12:40.451 "immediate_data": true, 00:12:40.451 "allow_duplicated_isid": false, 00:12:40.451 "error_recovery_level": 0, 00:12:40.451 "nop_timeout": 60, 00:12:40.451 "nop_in_interval": 30, 00:12:40.451 "disable_chap": 
false, 00:12:40.451 "require_chap": false, 00:12:40.451 "mutual_chap": false, 00:12:40.451 "chap_group": 0, 00:12:40.451 "max_large_datain_per_connection": 64, 00:12:40.451 "max_r2t_per_connection": 4, 00:12:40.451 "pdu_pool_size": 36864, 00:12:40.451 "immediate_data_pool_size": 16384, 00:12:40.451 "data_out_pool_size": 2048 00:12:40.451 } 00:12:40.451 } 00:12:40.451 ] 00:12:40.451 } 00:12:40.451 ] 00:12:40.451 } 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58279 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 58279 ']' 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 58279 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 58279 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:40.451 killing process with pid 58279 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 58279' 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 58279 00:12:40.451 09:05:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 58279 00:12:41.016 09:05:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58312 00:12:41.017 09:05:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:41.017 09:05:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58312 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 58312 ']' 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 58312 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 58312 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:46.277 killing process with pid 58312 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 58312' 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 58312 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 58312 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:46.277 09:05:58 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:46.277 ************************************ 00:12:46.278 END TEST skip_rpc_with_json 00:12:46.278 ************************************ 00:12:46.278 00:12:46.278 real 0m7.041s 00:12:46.278 user 0m6.758s 00:12:46.278 sys 0m0.610s 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:46.278 09:05:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:12:46.278 09:05:58 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:46.278 09:05:58 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:46.278 09:05:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.278 ************************************ 00:12:46.278 START TEST skip_rpc_with_delay 00:12:46.278 ************************************ 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:12:46.278 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:46.278 [2024-05-15 09:05:58.718997] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
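That ERROR line is the whole point of skip_rpc_with_delay: --wait-for-rpc asks spdk_tgt to pause initialization until an RPC tells it to continue, which can never happen when --no-rpc-server is also given, so the app must refuse to start and exit non-zero, and the NOT wrapper above turns that refusal into a pass. A sketch of the same expected failure, assuming the build path from this log:

  # The two flags are contradictory, so a failed start is the passing outcome here.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started despite contradictory flags" >&2
      exit 1
  fi
  echo "OK: startup rejected as expected"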
00:12:46.278 [2024-05-15 09:05:58.719499] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:12:46.536 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:12:46.536 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:46.536 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:46.536 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:46.536 00:12:46.536 real 0m0.088s 00:12:46.536 user 0m0.051s 00:12:46.536 sys 0m0.033s 00:12:46.536 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:46.536 09:05:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:12:46.536 ************************************ 00:12:46.536 END TEST skip_rpc_with_delay 00:12:46.536 ************************************ 00:12:46.536 09:05:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:12:46.536 09:05:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:12:46.536 09:05:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:12:46.536 09:05:58 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:46.536 09:05:58 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:46.536 09:05:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.536 ************************************ 00:12:46.536 START TEST exit_on_failed_rpc_init 00:12:46.536 ************************************ 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58416 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58416 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 58416 ']' 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:46.536 09:05:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:46.536 [2024-05-15 09:05:58.869214] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:12:46.536 [2024-05-15 09:05:58.869621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58416 ] 00:12:46.862 [2024-05-15 09:05:59.008542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.862 [2024-05-15 09:05:59.139308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.429 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:47.429 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:12:47.429 09:05:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:47.429 09:05:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:47.429 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:12:47.429 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:47.429 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:47.430 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:47.430 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:47.430 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:47.430 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:47.430 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:47.430 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:47.430 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:12:47.430 09:05:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:47.430 [2024-05-15 09:05:59.871692] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:12:47.430 [2024-05-15 09:05:59.872143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58434 ] 00:12:47.688 [2024-05-15 09:06:00.022262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.688 [2024-05-15 09:06:00.127903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.688 [2024-05-15 09:06:00.128408] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
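Note: exit_on_failed_rpc_init provokes the "socket in use" error above on purpose: a second spdk_tgt is pointed at the same default RPC socket as the first and must fail rpc_listen, then shut down. A stripped-down sketch of what the harness does (waitforlisten, killprocess and the NOT wrapper omitted; the sleep is only a stand-in for the real readiness poll):
  # First target claims the default RPC socket /var/tmp/spdk.sock.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  first=$!
  sleep 1   # stand-in for waitforlisten
  # Second target on a different core mask but the same socket path: expected to log
  # "RPC Unix domain socket path /var/tmp/spdk.sock in use" and exit non-zero.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
  kill -SIGINT "$first"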
00:12:47.688 [2024-05-15 09:06:00.128643] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:47.689 [2024-05-15 09:06:00.128831] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58416 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 58416 ']' 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 58416 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 58416 00:12:47.947 killing process with pid 58416 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 58416' 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 58416 00:12:47.947 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 58416 00:12:48.513 ************************************ 00:12:48.513 END TEST exit_on_failed_rpc_init 00:12:48.513 ************************************ 00:12:48.513 00:12:48.513 real 0m1.859s 00:12:48.513 user 0m2.173s 00:12:48.513 sys 0m0.408s 00:12:48.513 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:48.513 09:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:48.513 09:06:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:48.513 ************************************ 00:12:48.513 END TEST skip_rpc 00:12:48.513 ************************************ 00:12:48.513 00:12:48.513 real 0m14.737s 00:12:48.513 user 0m14.161s 00:12:48.513 sys 0m1.516s 00:12:48.513 09:06:00 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:48.513 09:06:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.513 09:06:00 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:48.513 09:06:00 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:48.513 09:06:00 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:48.513 09:06:00 -- common/autotest_common.sh@10 -- # set +x 00:12:48.513 
************************************ 00:12:48.513 START TEST rpc_client 00:12:48.513 ************************************ 00:12:48.513 09:06:00 rpc_client -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:48.513 * Looking for test storage... 00:12:48.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:12:48.513 09:06:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:12:48.513 OK 00:12:48.513 09:06:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:12:48.513 00:12:48.513 real 0m0.110s 00:12:48.513 user 0m0.048s 00:12:48.513 sys 0m0.068s 00:12:48.513 09:06:00 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:48.513 09:06:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:12:48.513 ************************************ 00:12:48.513 END TEST rpc_client 00:12:48.513 ************************************ 00:12:48.513 09:06:00 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:48.513 09:06:00 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:48.513 09:06:00 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:48.513 09:06:00 -- common/autotest_common.sh@10 -- # set +x 00:12:48.513 ************************************ 00:12:48.513 START TEST json_config 00:12:48.513 ************************************ 00:12:48.513 09:06:00 json_config -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:48.771 09:06:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.771 09:06:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.771 09:06:01 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.771 09:06:01 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.771 09:06:01 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.771 09:06:01 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.771 09:06:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.771 09:06:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.771 09:06:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.771 09:06:01 json_config -- paths/export.sh@5 -- # export PATH 00:12:48.772 09:06:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@47 -- # : 0 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.772 09:06:01 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:12:48.772 09:06:01 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:12:48.772 INFO: JSON configuration test init 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:48.772 09:06:01 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:12:48.772 09:06:01 json_config -- json_config/common.sh@9 -- # local app=target 00:12:48.772 09:06:01 json_config -- json_config/common.sh@10 -- # shift 00:12:48.772 09:06:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:48.772 09:06:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:48.772 09:06:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:48.772 09:06:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:48.772 09:06:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:48.772 09:06:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58552 00:12:48.772 09:06:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:48.772 Waiting for target to run... 00:12:48.772 09:06:01 json_config -- json_config/common.sh@25 -- # waitforlisten 58552 /var/tmp/spdk_tgt.sock 00:12:48.772 09:06:01 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@828 -- # '[' -z 58552 ']' 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:48.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
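Note: json_config_test_start_app, traced above, boils down to launching the target on a private RPC socket and polling it until RPC answers. An illustrative sketch only — the real polling lives in waitforlisten in autotest_common.sh, and rpc_get_methods is just one cheap RPC to probe with:
  sock=/var/tmp/spdk_tgt.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
  app_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # keep retrying until the RPC server is listening
  done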
00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:48.772 09:06:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:48.772 [2024-05-15 09:06:01.092469] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:12:48.772 [2024-05-15 09:06:01.092761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58552 ] 00:12:49.345 [2024-05-15 09:06:01.480644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.345 [2024-05-15 09:06:01.566414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.909 00:12:49.909 09:06:02 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:49.909 09:06:02 json_config -- common/autotest_common.sh@861 -- # return 0 00:12:49.909 09:06:02 json_config -- json_config/common.sh@26 -- # echo '' 00:12:49.909 09:06:02 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:12:49.909 09:06:02 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:12:49.909 09:06:02 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:49.909 09:06:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:49.909 09:06:02 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:12:49.909 09:06:02 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:12:49.909 09:06:02 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:49.909 09:06:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:49.909 09:06:02 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:12:49.909 09:06:02 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:12:49.909 09:06:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:12:50.166 09:06:02 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:12:50.166 09:06:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:12:50.166 09:06:02 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:50.166 09:06:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:50.166 09:06:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:12:50.166 09:06:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:12:50.166 09:06:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:12:50.166 09:06:02 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:12:50.166 09:06:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:12:50.166 09:06:02 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 
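Note: the notification-type check above can be reproduced directly against the running target; the test simply fetches the list and compares it with "bdev_register bdev_unregister":
  # Prints the notification types the target supports, one per line.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'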
00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@48 -- # local get_types 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:12:50.423 09:06:02 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:50.423 09:06:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@55 -- # return 0 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:12:50.423 09:06:02 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:50.423 09:06:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:12:50.423 09:06:02 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:12:50.423 09:06:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:12:50.681 MallocForNvmf0 00:12:50.681 09:06:03 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:12:50.681 09:06:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:12:50.937 MallocForNvmf1 00:12:50.937 09:06:03 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:12:50.937 09:06:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:12:51.194 [2024-05-15 09:06:03.627913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.453 09:06:03 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:51.453 09:06:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:51.453 09:06:03 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:12:51.453 09:06:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:12:51.712 09:06:04 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:12:51.712 09:06:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:12:51.971 09:06:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:12:51.971 09:06:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:12:52.229 [2024-05-15 09:06:04.581440] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:52.229 [2024-05-15 09:06:04.582038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:52.229 09:06:04 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:12:52.229 09:06:04 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:52.229 09:06:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:52.229 09:06:04 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:12:52.229 09:06:04 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:52.229 09:06:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:52.488 09:06:04 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:12:52.488 09:06:04 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:52.488 09:06:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:52.488 MallocBdevForConfigChangeCheck 00:12:52.488 09:06:04 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:12:52.488 09:06:04 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:52.488 09:06:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:52.748 09:06:04 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:12:52.748 09:06:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:53.007 INFO: shutting down applications... 00:12:53.007 09:06:05 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
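Note: collected in one place, the create_nvmf_subsystem_config steps traced above amount to the following RPC sequence (socket and paths as used in this run; the save_config redirection to spdk_tgt_config.json is added here for illustration):
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json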
00:12:53.007 09:06:05 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:12:53.007 09:06:05 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:12:53.007 09:06:05 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:12:53.007 09:06:05 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:12:53.265 Calling clear_iscsi_subsystem 00:12:53.265 Calling clear_nvmf_subsystem 00:12:53.265 Calling clear_nbd_subsystem 00:12:53.265 Calling clear_ublk_subsystem 00:12:53.265 Calling clear_vhost_blk_subsystem 00:12:53.265 Calling clear_vhost_scsi_subsystem 00:12:53.265 Calling clear_bdev_subsystem 00:12:53.265 09:06:05 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:12:53.265 09:06:05 json_config -- json_config/json_config.sh@343 -- # count=100 00:12:53.265 09:06:05 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:12:53.265 09:06:05 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:53.265 09:06:05 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:12:53.265 09:06:05 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:12:53.833 09:06:06 json_config -- json_config/json_config.sh@345 -- # break 00:12:53.833 09:06:06 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:12:53.833 09:06:06 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:12:53.833 09:06:06 json_config -- json_config/common.sh@31 -- # local app=target 00:12:53.833 09:06:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:53.833 09:06:06 json_config -- json_config/common.sh@35 -- # [[ -n 58552 ]] 00:12:53.833 09:06:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58552 00:12:53.833 09:06:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:53.833 09:06:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:53.833 09:06:06 json_config -- json_config/common.sh@41 -- # kill -0 58552 00:12:53.833 09:06:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:12:53.833 [2024-05-15 09:06:06.022771] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:54.092 09:06:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:12:54.092 09:06:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:54.092 09:06:06 json_config -- json_config/common.sh@41 -- # kill -0 58552 00:12:54.092 09:06:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:54.092 09:06:06 json_config -- json_config/common.sh@43 -- # break 00:12:54.092 09:06:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:54.092 SPDK target shutdown done 00:12:54.092 09:06:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:54.092 INFO: relaunching applications... 00:12:54.092 09:06:06 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
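Note: the graceful-shutdown loop traced above follows a simple pattern — send SIGINT, then poll the pid for up to 30 half-second intervals before giving up:
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # pid gone: target shut down cleanly
    sleep 0.5
  done
  echo 'SPDK target shutdown done'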
00:12:54.092 09:06:06 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:54.092 09:06:06 json_config -- json_config/common.sh@9 -- # local app=target 00:12:54.092 09:06:06 json_config -- json_config/common.sh@10 -- # shift 00:12:54.092 09:06:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:54.092 09:06:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:54.092 09:06:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:54.092 09:06:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:54.092 09:06:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:54.092 Waiting for target to run... 00:12:54.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:54.092 09:06:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58748 00:12:54.092 09:06:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:54.092 09:06:06 json_config -- json_config/common.sh@25 -- # waitforlisten 58748 /var/tmp/spdk_tgt.sock 00:12:54.092 09:06:06 json_config -- common/autotest_common.sh@828 -- # '[' -z 58748 ']' 00:12:54.092 09:06:06 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:54.093 09:06:06 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:54.093 09:06:06 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:54.093 09:06:06 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:54.093 09:06:06 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:54.093 09:06:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:54.351 [2024-05-15 09:06:06.599725] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:12:54.351 [2024-05-15 09:06:06.600203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58748 ] 00:12:54.610 [2024-05-15 09:06:06.997495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.868 [2024-05-15 09:06:07.080837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.127 [2024-05-15 09:06:07.394246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.127 [2024-05-15 09:06:07.426115] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:55.127 [2024-05-15 09:06:07.426579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:55.127 09:06:07 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:55.127 09:06:07 json_config -- common/autotest_common.sh@861 -- # return 0 00:12:55.127 00:12:55.127 09:06:07 json_config -- json_config/common.sh@26 -- # echo '' 00:12:55.127 09:06:07 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:12:55.127 INFO: Checking if target configuration is the same... 
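Note: the relaunch above differs from the first start only in that --wait-for-rpc is replaced by --json, so the saved configuration (malloc bdevs, tcp transport, cnode1 and the 127.0.0.1:4420 listener) is restored before the RPC server comes up:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json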
00:12:55.127 09:06:07 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:12:55.127 09:06:07 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:55.127 09:06:07 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:12:55.127 09:06:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:55.127 + '[' 2 -ne 2 ']' 00:12:55.127 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:55.127 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:12:55.127 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:55.127 +++ basename /dev/fd/62 00:12:55.127 ++ mktemp /tmp/62.XXX 00:12:55.127 + tmp_file_1=/tmp/62.KFE 00:12:55.127 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:55.127 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:55.385 + tmp_file_2=/tmp/spdk_tgt_config.json.8OI 00:12:55.385 + ret=0 00:12:55.385 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:55.644 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:55.644 + diff -u /tmp/62.KFE /tmp/spdk_tgt_config.json.8OI 00:12:55.644 INFO: JSON config files are the same 00:12:55.645 + echo 'INFO: JSON config files are the same' 00:12:55.645 + rm /tmp/62.KFE /tmp/spdk_tgt_config.json.8OI 00:12:55.645 + exit 0 00:12:55.645 09:06:07 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:12:55.645 INFO: changing configuration and checking if this can be detected... 00:12:55.645 09:06:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:12:55.645 09:06:07 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:55.645 09:06:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:55.903 09:06:08 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:55.903 09:06:08 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:12:55.903 09:06:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:55.903 + '[' 2 -ne 2 ']' 00:12:55.903 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:55.903 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
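Note: json_diff.sh, traced above and again below for the modified configuration, compares a live save_config dump against spdk_tgt_config.json by sorting both through config_filter.py and diffing the results. A sketch of that flow, assuming the filter reads stdin and writes stdout as the trace suggests (the real script uses mktemp names like /tmp/62.XXX):
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $filter -method sort > /tmp/live.sorted
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.sorted
  if diff -u /tmp/live.sorted /tmp/saved.sorted; then
    echo 'INFO: JSON config files are the same'   # first comparison above
  else
    ret=1   # second comparison, after deleting MallocBdevForConfigChangeCheck
  fi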
00:12:55.903 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:55.903 +++ basename /dev/fd/62 00:12:55.903 ++ mktemp /tmp/62.XXX 00:12:55.903 + tmp_file_1=/tmp/62.COu 00:12:55.903 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:55.903 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:55.903 + tmp_file_2=/tmp/spdk_tgt_config.json.3H9 00:12:55.903 + ret=0 00:12:55.903 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:56.469 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:56.469 + diff -u /tmp/62.COu /tmp/spdk_tgt_config.json.3H9 00:12:56.469 + ret=1 00:12:56.469 + echo '=== Start of file: /tmp/62.COu ===' 00:12:56.469 + cat /tmp/62.COu 00:12:56.469 + echo '=== End of file: /tmp/62.COu ===' 00:12:56.469 + echo '' 00:12:56.469 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3H9 ===' 00:12:56.469 + cat /tmp/spdk_tgt_config.json.3H9 00:12:56.469 + echo '=== End of file: /tmp/spdk_tgt_config.json.3H9 ===' 00:12:56.469 + echo '' 00:12:56.469 + rm /tmp/62.COu /tmp/spdk_tgt_config.json.3H9 00:12:56.469 + exit 1 00:12:56.469 INFO: configuration change detected. 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@317 -- # [[ -n 58748 ]] 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@193 -- # uname -s 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 09:06:08 json_config -- json_config/json_config.sh@323 -- # killprocess 58748 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@947 -- # '[' -z 58748 ']' 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@951 -- # kill -0 58748 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@952 -- # uname 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 58748 00:12:56.469 
09:06:08 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:56.469 killing process with pid 58748 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 58748' 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@966 -- # kill 58748 00:12:56.469 09:06:08 json_config -- common/autotest_common.sh@971 -- # wait 58748 00:12:56.469 [2024-05-15 09:06:08.862792] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:56.728 09:06:09 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:56.728 09:06:09 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:12:56.728 09:06:09 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:56.728 09:06:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:56.988 INFO: Success 00:12:56.988 09:06:09 json_config -- json_config/json_config.sh@328 -- # return 0 00:12:56.988 09:06:09 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:12:56.988 ************************************ 00:12:56.988 END TEST json_config 00:12:56.988 ************************************ 00:12:56.988 00:12:56.988 real 0m8.239s 00:12:56.988 user 0m11.668s 00:12:56.988 sys 0m1.719s 00:12:56.988 09:06:09 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:56.988 09:06:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:56.988 09:06:09 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:56.988 09:06:09 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:56.988 09:06:09 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:56.988 09:06:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.988 ************************************ 00:12:56.988 START TEST json_config_extra_key 00:12:56.988 ************************************ 00:12:56.988 09:06:09 json_config_extra_key -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:56.988 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:56.988 09:06:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:12:56.988 09:06:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.988 09:06:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.989 09:06:09 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.989 09:06:09 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.989 09:06:09 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.989 09:06:09 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.989 09:06:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.989 09:06:09 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.989 09:06:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.989 09:06:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:12:56.989 09:06:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.989 09:06:09 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.989 09:06:09 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:56.989 INFO: launching applications... 00:12:56.989 09:06:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58883 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:56.989 Waiting for target to run... 
00:12:56.989 09:06:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58883 /var/tmp/spdk_tgt.sock 00:12:56.989 09:06:09 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 58883 ']' 00:12:56.989 09:06:09 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:56.989 09:06:09 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:56.989 09:06:09 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:56.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:56.989 09:06:09 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:56.989 09:06:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:56.989 [2024-05-15 09:06:09.391662] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:12:56.989 [2024-05-15 09:06:09.392045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58883 ] 00:12:57.556 [2024-05-15 09:06:09.770098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.556 [2024-05-15 09:06:09.854714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.124 00:12:58.124 INFO: shutting down applications... 00:12:58.124 09:06:10 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:58.124 09:06:10 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:12:58.124 09:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:12:58.124 09:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58883 ]] 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58883 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58883 00:12:58.124 09:06:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:58.691 09:06:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:58.691 09:06:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:58.691 SPDK target shutdown done 00:12:58.691 09:06:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58883 00:12:58.691 09:06:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:58.691 09:06:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:12:58.691 09:06:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:58.691 09:06:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:58.691 Success 00:12:58.691 09:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:58.691 ************************************ 00:12:58.691 END TEST json_config_extra_key 00:12:58.691 ************************************ 00:12:58.691 00:12:58.691 real 0m1.657s 00:12:58.691 user 0m1.570s 00:12:58.691 sys 0m0.402s 00:12:58.691 09:06:10 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:58.691 09:06:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:58.691 09:06:10 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:58.691 09:06:10 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:58.691 09:06:10 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:58.691 09:06:10 -- common/autotest_common.sh@10 -- # set +x 00:12:58.691 ************************************ 00:12:58.691 START TEST alias_rpc 00:12:58.691 ************************************ 00:12:58.691 09:06:10 alias_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:58.691 * Looking for test storage... 
00:12:58.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:12:58.691 09:06:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:58.691 09:06:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58953 00:12:58.691 09:06:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:58.691 09:06:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58953 00:12:58.691 09:06:11 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 58953 ']' 00:12:58.691 09:06:11 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.691 09:06:11 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:58.691 09:06:11 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.691 09:06:11 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:58.691 09:06:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.691 [2024-05-15 09:06:11.081358] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:12:58.691 [2024-05-15 09:06:11.081740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:12:58.949 [2024-05-15 09:06:11.222172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.949 [2024-05-15 09:06:11.346944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.885 09:06:11 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:59.885 09:06:11 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:12:59.885 09:06:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:12:59.885 09:06:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58953 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 58953 ']' 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 58953 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 58953 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:59.885 killing process with pid 58953 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 58953' 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@966 -- # kill 58953 00:12:59.885 09:06:12 alias_rpc -- common/autotest_common.sh@971 -- # wait 58953 00:13:00.451 ************************************ 00:13:00.451 END TEST alias_rpc 00:13:00.451 ************************************ 00:13:00.451 00:13:00.451 real 0m1.712s 00:13:00.451 user 0m1.859s 00:13:00.451 sys 0m0.417s 00:13:00.451 09:06:12 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:00.451 09:06:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.451 09:06:12 -- 
spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:13:00.451 09:06:12 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:13:00.451 09:06:12 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:00.451 09:06:12 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:00.451 09:06:12 -- common/autotest_common.sh@10 -- # set +x 00:13:00.451 ************************************ 00:13:00.451 START TEST spdkcli_tcp 00:13:00.451 ************************************ 00:13:00.451 09:06:12 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:13:00.451 * Looking for test storage... 00:13:00.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:13:00.451 09:06:12 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:00.451 09:06:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59029 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:13:00.451 09:06:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59029 00:13:00.451 09:06:12 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 59029 ']' 00:13:00.451 09:06:12 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.451 09:06:12 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:00.452 09:06:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.452 09:06:12 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:00.452 09:06:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.452 [2024-05-15 09:06:12.860467] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:00.452 [2024-05-15 09:06:12.861128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59029 ] 00:13:00.710 [2024-05-15 09:06:13.000036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:00.710 [2024-05-15 09:06:13.133345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.710 [2024-05-15 09:06:13.133355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.643 09:06:13 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:01.643 09:06:13 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:13:01.643 09:06:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59045 00:13:01.643 09:06:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:13:01.643 09:06:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:13:01.643 [ 00:13:01.643 "bdev_malloc_delete", 00:13:01.643 "bdev_malloc_create", 00:13:01.643 "bdev_null_resize", 00:13:01.643 "bdev_null_delete", 00:13:01.643 "bdev_null_create", 00:13:01.643 "bdev_nvme_cuse_unregister", 00:13:01.643 "bdev_nvme_cuse_register", 00:13:01.643 "bdev_opal_new_user", 00:13:01.643 "bdev_opal_set_lock_state", 00:13:01.643 "bdev_opal_delete", 00:13:01.643 "bdev_opal_get_info", 00:13:01.643 "bdev_opal_create", 00:13:01.643 "bdev_nvme_opal_revert", 00:13:01.643 "bdev_nvme_opal_init", 00:13:01.643 "bdev_nvme_send_cmd", 00:13:01.643 "bdev_nvme_get_path_iostat", 00:13:01.643 "bdev_nvme_get_mdns_discovery_info", 00:13:01.643 "bdev_nvme_stop_mdns_discovery", 00:13:01.643 "bdev_nvme_start_mdns_discovery", 00:13:01.643 "bdev_nvme_set_multipath_policy", 00:13:01.643 "bdev_nvme_set_preferred_path", 00:13:01.643 "bdev_nvme_get_io_paths", 00:13:01.643 "bdev_nvme_remove_error_injection", 00:13:01.643 "bdev_nvme_add_error_injection", 00:13:01.643 "bdev_nvme_get_discovery_info", 00:13:01.643 "bdev_nvme_stop_discovery", 00:13:01.643 "bdev_nvme_start_discovery", 00:13:01.643 "bdev_nvme_get_controller_health_info", 00:13:01.643 "bdev_nvme_disable_controller", 00:13:01.643 "bdev_nvme_enable_controller", 00:13:01.643 "bdev_nvme_reset_controller", 00:13:01.643 "bdev_nvme_get_transport_statistics", 00:13:01.643 "bdev_nvme_apply_firmware", 00:13:01.643 "bdev_nvme_detach_controller", 00:13:01.643 "bdev_nvme_get_controllers", 00:13:01.643 "bdev_nvme_attach_controller", 00:13:01.643 "bdev_nvme_set_hotplug", 00:13:01.643 "bdev_nvme_set_options", 00:13:01.643 "bdev_passthru_delete", 00:13:01.643 "bdev_passthru_create", 00:13:01.643 "bdev_lvol_set_parent", 00:13:01.643 "bdev_lvol_check_shallow_copy", 00:13:01.643 "bdev_lvol_start_shallow_copy", 00:13:01.643 "bdev_lvol_grow_lvstore", 00:13:01.643 "bdev_lvol_get_lvols", 00:13:01.643 "bdev_lvol_get_lvstores", 00:13:01.643 "bdev_lvol_delete", 00:13:01.643 "bdev_lvol_set_read_only", 00:13:01.643 "bdev_lvol_resize", 00:13:01.643 "bdev_lvol_decouple_parent", 00:13:01.643 "bdev_lvol_inflate", 00:13:01.643 "bdev_lvol_rename", 00:13:01.643 "bdev_lvol_clone_bdev", 00:13:01.643 "bdev_lvol_clone", 00:13:01.643 "bdev_lvol_snapshot", 00:13:01.643 "bdev_lvol_create", 00:13:01.643 "bdev_lvol_delete_lvstore", 00:13:01.643 "bdev_lvol_rename_lvstore", 00:13:01.643 "bdev_lvol_create_lvstore", 00:13:01.643 "bdev_raid_set_options", 00:13:01.643 
"bdev_raid_remove_base_bdev", 00:13:01.643 "bdev_raid_add_base_bdev", 00:13:01.643 "bdev_raid_delete", 00:13:01.643 "bdev_raid_create", 00:13:01.643 "bdev_raid_get_bdevs", 00:13:01.643 "bdev_error_inject_error", 00:13:01.643 "bdev_error_delete", 00:13:01.643 "bdev_error_create", 00:13:01.643 "bdev_split_delete", 00:13:01.643 "bdev_split_create", 00:13:01.643 "bdev_delay_delete", 00:13:01.643 "bdev_delay_create", 00:13:01.643 "bdev_delay_update_latency", 00:13:01.643 "bdev_zone_block_delete", 00:13:01.643 "bdev_zone_block_create", 00:13:01.643 "blobfs_create", 00:13:01.643 "blobfs_detect", 00:13:01.643 "blobfs_set_cache_size", 00:13:01.643 "bdev_aio_delete", 00:13:01.643 "bdev_aio_rescan", 00:13:01.643 "bdev_aio_create", 00:13:01.643 "bdev_ftl_set_property", 00:13:01.643 "bdev_ftl_get_properties", 00:13:01.643 "bdev_ftl_get_stats", 00:13:01.643 "bdev_ftl_unmap", 00:13:01.643 "bdev_ftl_unload", 00:13:01.643 "bdev_ftl_delete", 00:13:01.643 "bdev_ftl_load", 00:13:01.643 "bdev_ftl_create", 00:13:01.643 "bdev_virtio_attach_controller", 00:13:01.643 "bdev_virtio_scsi_get_devices", 00:13:01.643 "bdev_virtio_detach_controller", 00:13:01.643 "bdev_virtio_blk_set_hotplug", 00:13:01.643 "bdev_iscsi_delete", 00:13:01.643 "bdev_iscsi_create", 00:13:01.643 "bdev_iscsi_set_options", 00:13:01.643 "bdev_uring_delete", 00:13:01.643 "bdev_uring_rescan", 00:13:01.643 "bdev_uring_create", 00:13:01.643 "accel_error_inject_error", 00:13:01.643 "ioat_scan_accel_module", 00:13:01.643 "dsa_scan_accel_module", 00:13:01.643 "iaa_scan_accel_module", 00:13:01.643 "keyring_file_remove_key", 00:13:01.643 "keyring_file_add_key", 00:13:01.643 "iscsi_get_histogram", 00:13:01.643 "iscsi_enable_histogram", 00:13:01.643 "iscsi_set_options", 00:13:01.643 "iscsi_get_auth_groups", 00:13:01.643 "iscsi_auth_group_remove_secret", 00:13:01.643 "iscsi_auth_group_add_secret", 00:13:01.643 "iscsi_delete_auth_group", 00:13:01.643 "iscsi_create_auth_group", 00:13:01.643 "iscsi_set_discovery_auth", 00:13:01.643 "iscsi_get_options", 00:13:01.643 "iscsi_target_node_request_logout", 00:13:01.643 "iscsi_target_node_set_redirect", 00:13:01.643 "iscsi_target_node_set_auth", 00:13:01.643 "iscsi_target_node_add_lun", 00:13:01.643 "iscsi_get_stats", 00:13:01.643 "iscsi_get_connections", 00:13:01.643 "iscsi_portal_group_set_auth", 00:13:01.643 "iscsi_start_portal_group", 00:13:01.643 "iscsi_delete_portal_group", 00:13:01.643 "iscsi_create_portal_group", 00:13:01.643 "iscsi_get_portal_groups", 00:13:01.643 "iscsi_delete_target_node", 00:13:01.643 "iscsi_target_node_remove_pg_ig_maps", 00:13:01.643 "iscsi_target_node_add_pg_ig_maps", 00:13:01.643 "iscsi_create_target_node", 00:13:01.643 "iscsi_get_target_nodes", 00:13:01.643 "iscsi_delete_initiator_group", 00:13:01.643 "iscsi_initiator_group_remove_initiators", 00:13:01.643 "iscsi_initiator_group_add_initiators", 00:13:01.643 "iscsi_create_initiator_group", 00:13:01.643 "iscsi_get_initiator_groups", 00:13:01.643 "nvmf_set_crdt", 00:13:01.643 "nvmf_set_config", 00:13:01.643 "nvmf_set_max_subsystems", 00:13:01.643 "nvmf_stop_mdns_prr", 00:13:01.643 "nvmf_publish_mdns_prr", 00:13:01.643 "nvmf_subsystem_get_listeners", 00:13:01.643 "nvmf_subsystem_get_qpairs", 00:13:01.643 "nvmf_subsystem_get_controllers", 00:13:01.643 "nvmf_get_stats", 00:13:01.643 "nvmf_get_transports", 00:13:01.643 "nvmf_create_transport", 00:13:01.643 "nvmf_get_targets", 00:13:01.643 "nvmf_delete_target", 00:13:01.643 "nvmf_create_target", 00:13:01.643 "nvmf_subsystem_allow_any_host", 00:13:01.643 "nvmf_subsystem_remove_host", 
00:13:01.643 "nvmf_subsystem_add_host", 00:13:01.643 "nvmf_ns_remove_host", 00:13:01.643 "nvmf_ns_add_host", 00:13:01.643 "nvmf_subsystem_remove_ns", 00:13:01.643 "nvmf_subsystem_add_ns", 00:13:01.643 "nvmf_subsystem_listener_set_ana_state", 00:13:01.643 "nvmf_discovery_get_referrals", 00:13:01.643 "nvmf_discovery_remove_referral", 00:13:01.643 "nvmf_discovery_add_referral", 00:13:01.643 "nvmf_subsystem_remove_listener", 00:13:01.643 "nvmf_subsystem_add_listener", 00:13:01.643 "nvmf_delete_subsystem", 00:13:01.643 "nvmf_create_subsystem", 00:13:01.643 "nvmf_get_subsystems", 00:13:01.643 "env_dpdk_get_mem_stats", 00:13:01.643 "nbd_get_disks", 00:13:01.643 "nbd_stop_disk", 00:13:01.643 "nbd_start_disk", 00:13:01.643 "ublk_recover_disk", 00:13:01.643 "ublk_get_disks", 00:13:01.643 "ublk_stop_disk", 00:13:01.643 "ublk_start_disk", 00:13:01.643 "ublk_destroy_target", 00:13:01.643 "ublk_create_target", 00:13:01.643 "virtio_blk_create_transport", 00:13:01.643 "virtio_blk_get_transports", 00:13:01.643 "vhost_controller_set_coalescing", 00:13:01.643 "vhost_get_controllers", 00:13:01.643 "vhost_delete_controller", 00:13:01.643 "vhost_create_blk_controller", 00:13:01.643 "vhost_scsi_controller_remove_target", 00:13:01.643 "vhost_scsi_controller_add_target", 00:13:01.643 "vhost_start_scsi_controller", 00:13:01.643 "vhost_create_scsi_controller", 00:13:01.643 "thread_set_cpumask", 00:13:01.643 "framework_get_scheduler", 00:13:01.643 "framework_set_scheduler", 00:13:01.643 "framework_get_reactors", 00:13:01.643 "thread_get_io_channels", 00:13:01.643 "thread_get_pollers", 00:13:01.643 "thread_get_stats", 00:13:01.643 "framework_monitor_context_switch", 00:13:01.643 "spdk_kill_instance", 00:13:01.643 "log_enable_timestamps", 00:13:01.643 "log_get_flags", 00:13:01.643 "log_clear_flag", 00:13:01.643 "log_set_flag", 00:13:01.643 "log_get_level", 00:13:01.643 "log_set_level", 00:13:01.643 "log_get_print_level", 00:13:01.643 "log_set_print_level", 00:13:01.643 "framework_enable_cpumask_locks", 00:13:01.643 "framework_disable_cpumask_locks", 00:13:01.643 "framework_wait_init", 00:13:01.643 "framework_start_init", 00:13:01.643 "scsi_get_devices", 00:13:01.643 "bdev_get_histogram", 00:13:01.643 "bdev_enable_histogram", 00:13:01.643 "bdev_set_qos_limit", 00:13:01.643 "bdev_set_qd_sampling_period", 00:13:01.643 "bdev_get_bdevs", 00:13:01.643 "bdev_reset_iostat", 00:13:01.643 "bdev_get_iostat", 00:13:01.643 "bdev_examine", 00:13:01.643 "bdev_wait_for_examine", 00:13:01.643 "bdev_set_options", 00:13:01.643 "notify_get_notifications", 00:13:01.644 "notify_get_types", 00:13:01.644 "accel_get_stats", 00:13:01.644 "accel_set_options", 00:13:01.644 "accel_set_driver", 00:13:01.644 "accel_crypto_key_destroy", 00:13:01.644 "accel_crypto_keys_get", 00:13:01.644 "accel_crypto_key_create", 00:13:01.644 "accel_assign_opc", 00:13:01.644 "accel_get_module_info", 00:13:01.644 "accel_get_opc_assignments", 00:13:01.644 "vmd_rescan", 00:13:01.644 "vmd_remove_device", 00:13:01.644 "vmd_enable", 00:13:01.644 "sock_get_default_impl", 00:13:01.644 "sock_set_default_impl", 00:13:01.644 "sock_impl_set_options", 00:13:01.644 "sock_impl_get_options", 00:13:01.644 "iobuf_get_stats", 00:13:01.644 "iobuf_set_options", 00:13:01.644 "framework_get_pci_devices", 00:13:01.644 "framework_get_config", 00:13:01.644 "framework_get_subsystems", 00:13:01.644 "trace_get_info", 00:13:01.644 "trace_get_tpoint_group_mask", 00:13:01.644 "trace_disable_tpoint_group", 00:13:01.644 "trace_enable_tpoint_group", 00:13:01.644 "trace_clear_tpoint_mask", 00:13:01.644 
"trace_set_tpoint_mask", 00:13:01.644 "keyring_get_keys", 00:13:01.644 "spdk_get_version", 00:13:01.644 "rpc_get_methods" 00:13:01.644 ] 00:13:01.644 09:06:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:13:01.644 09:06:14 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:01.644 09:06:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:01.902 09:06:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:01.902 09:06:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59029 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 59029 ']' 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 59029 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59029 00:13:01.902 killing process with pid 59029 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59029' 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 59029 00:13:01.902 09:06:14 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 59029 00:13:02.160 ************************************ 00:13:02.160 END TEST spdkcli_tcp 00:13:02.160 ************************************ 00:13:02.160 00:13:02.160 real 0m1.829s 00:13:02.160 user 0m3.349s 00:13:02.160 sys 0m0.462s 00:13:02.160 09:06:14 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:02.160 09:06:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.160 09:06:14 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:02.160 09:06:14 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:02.160 09:06:14 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:02.160 09:06:14 -- common/autotest_common.sh@10 -- # set +x 00:13:02.160 ************************************ 00:13:02.160 START TEST dpdk_mem_utility 00:13:02.160 ************************************ 00:13:02.160 09:06:14 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:02.417 * Looking for test storage... 
00:13:02.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:13:02.417 09:06:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:02.417 09:06:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59115 00:13:02.417 09:06:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:02.417 09:06:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59115 00:13:02.417 09:06:14 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 59115 ']' 00:13:02.417 09:06:14 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.417 09:06:14 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:02.417 09:06:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.417 09:06:14 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:02.417 09:06:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:02.417 [2024-05-15 09:06:14.747775] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:02.417 [2024-05-15 09:06:14.748157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59115 ] 00:13:02.675 [2024-05-15 09:06:14.890049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.675 [2024-05-15 09:06:14.997807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.241 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:03.241 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:13:03.241 09:06:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:13:03.241 09:06:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:13:03.241 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.241 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:03.241 { 00:13:03.241 "filename": "/tmp/spdk_mem_dump.txt" 00:13:03.241 } 00:13:03.241 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.241 09:06:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:03.501 DPDK memory size 814.000000 MiB in 1 heap(s) 00:13:03.501 1 heaps totaling size 814.000000 MiB 00:13:03.501 size: 814.000000 MiB heap id: 0 00:13:03.501 end heaps---------- 00:13:03.501 8 mempools totaling size 598.116089 MiB 00:13:03.501 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:13:03.501 size: 158.602051 MiB name: PDU_data_out_Pool 00:13:03.501 size: 84.521057 MiB name: bdev_io_59115 00:13:03.501 size: 51.011292 MiB name: evtpool_59115 00:13:03.501 size: 50.003479 MiB name: msgpool_59115 00:13:03.501 size: 21.763794 MiB name: PDU_Pool 00:13:03.501 size: 19.513306 MiB name: SCSI_TASK_Pool 00:13:03.501 size: 0.026123 
MiB name: Session_Pool 00:13:03.501 end mempools------- 00:13:03.501 6 memzones totaling size 4.142822 MiB 00:13:03.501 size: 1.000366 MiB name: RG_ring_0_59115 00:13:03.501 size: 1.000366 MiB name: RG_ring_1_59115 00:13:03.501 size: 1.000366 MiB name: RG_ring_4_59115 00:13:03.501 size: 1.000366 MiB name: RG_ring_5_59115 00:13:03.501 size: 0.125366 MiB name: RG_ring_2_59115 00:13:03.501 size: 0.015991 MiB name: RG_ring_3_59115 00:13:03.501 end memzones------- 00:13:03.501 09:06:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:13:03.501 heap id: 0 total size: 814.000000 MiB number of busy elements: 228 number of free elements: 15 00:13:03.501 list of free elements. size: 12.485107 MiB 00:13:03.501 element at address: 0x200000400000 with size: 1.999512 MiB 00:13:03.501 element at address: 0x200018e00000 with size: 0.999878 MiB 00:13:03.501 element at address: 0x200019000000 with size: 0.999878 MiB 00:13:03.501 element at address: 0x200003e00000 with size: 0.996277 MiB 00:13:03.501 element at address: 0x200031c00000 with size: 0.994446 MiB 00:13:03.501 element at address: 0x200013800000 with size: 0.978699 MiB 00:13:03.501 element at address: 0x200007000000 with size: 0.959839 MiB 00:13:03.501 element at address: 0x200019200000 with size: 0.936584 MiB 00:13:03.501 element at address: 0x200000200000 with size: 0.836853 MiB 00:13:03.501 element at address: 0x20001aa00000 with size: 0.567505 MiB 00:13:03.501 element at address: 0x20000b200000 with size: 0.488892 MiB 00:13:03.501 element at address: 0x200000800000 with size: 0.487061 MiB 00:13:03.501 element at address: 0x200019400000 with size: 0.485657 MiB 00:13:03.501 element at address: 0x200027e00000 with size: 0.402893 MiB 00:13:03.501 element at address: 0x200003a00000 with size: 0.351135 MiB 00:13:03.501 list of standard malloc elements. 
size: 199.252319 MiB 00:13:03.501 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:13:03.501 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:13:03.501 element at address: 0x200018efff80 with size: 1.000122 MiB 00:13:03.501 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:13:03.501 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:13:03.501 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:13:03.501 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:13:03.501 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:13:03.501 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:13:03.501 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:13:03.501 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:13:03.501 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:13:03.501 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:13:03.501 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:13:03.501 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x2000008fd180 with size: 0.000183 MiB 
00:13:03.501 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003adb300 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003adb500 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003affa80 with size: 0.000183 MiB 00:13:03.501 element at address: 0x200003affb40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:13:03.502 element at 
address: 0x20001aa91480 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93940 
with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:13:03.502 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e67240 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e67300 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6df00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 
00:13:03.502 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:13:03.502 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:13:03.502 list of memzone associated elements. 
size: 602.262573 MiB 00:13:03.502 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:13:03.502 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:13:03.502 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:13:03.502 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:13:03.502 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:13:03.503 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59115_0 00:13:03.503 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:13:03.503 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59115_0 00:13:03.503 element at address: 0x200003fff380 with size: 48.003052 MiB 00:13:03.503 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59115_0 00:13:03.503 element at address: 0x2000195be940 with size: 20.255554 MiB 00:13:03.503 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:13:03.503 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:13:03.503 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:13:03.503 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:13:03.503 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59115 00:13:03.503 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:13:03.503 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59115 00:13:03.503 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:13:03.503 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59115 00:13:03.503 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:13:03.503 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:13:03.503 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:13:03.503 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:13:03.503 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:13:03.503 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:13:03.503 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:13:03.503 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:13:03.503 element at address: 0x200003eff180 with size: 1.000488 MiB 00:13:03.503 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59115 00:13:03.503 element at address: 0x200003affc00 with size: 1.000488 MiB 00:13:03.503 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59115 00:13:03.503 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:13:03.503 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59115 00:13:03.503 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:13:03.503 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59115 00:13:03.503 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:13:03.503 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59115 00:13:03.503 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:13:03.503 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:13:03.503 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:13:03.503 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:13:03.503 element at address: 0x20001947c540 with size: 0.250488 MiB 00:13:03.503 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:13:03.503 element at address: 0x200003adf880 with size: 0.125488 MiB 00:13:03.503 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59115 00:13:03.503 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:13:03.503 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:13:03.503 element at address: 0x200027e673c0 with size: 0.023743 MiB 00:13:03.503 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:13:03.503 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:13:03.503 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59115 00:13:03.503 element at address: 0x200027e6d500 with size: 0.002441 MiB 00:13:03.503 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:13:03.503 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:13:03.503 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59115 00:13:03.503 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:13:03.503 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59115 00:13:03.503 element at address: 0x200027e6dfc0 with size: 0.000305 MiB 00:13:03.503 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:13:03.503 09:06:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:13:03.503 09:06:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59115 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 59115 ']' 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 59115 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59115 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59115' 00:13:03.503 killing process with pid 59115 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 59115 00:13:03.503 09:06:15 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 59115 00:13:03.838 00:13:03.838 real 0m1.618s 00:13:03.838 user 0m1.731s 00:13:03.838 sys 0m0.384s 00:13:03.838 09:06:16 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:03.838 ************************************ 00:13:03.838 END TEST dpdk_mem_utility 00:13:03.838 ************************************ 00:13:03.838 09:06:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:04.107 09:06:16 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:04.107 09:06:16 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:04.107 09:06:16 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:04.107 09:06:16 -- common/autotest_common.sh@10 -- # set +x 00:13:04.107 ************************************ 00:13:04.107 START TEST event 00:13:04.107 ************************************ 00:13:04.107 09:06:16 event -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:04.107 * Looking for test storage... 
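The memory report above is produced in two steps: the running target is asked to dump its DPDK memory statistics to a file, and scripts/dpdk_mem_info.py then summarizes that dump (plain invocation for the heap/mempool/memzone totals, -m 0 for the per-heap element listing). A short sketch of the same sequence, assuming the script reads the default /tmp/spdk_mem_dump.txt named in the RPC reply above:

# 1) Ask spdk_tgt to write its DPDK memory statistics; the reply names the dump file.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
#    -> { "filename": "/tmp/spdk_mem_dump.txt" }

# 2) Post-process the dump: totals first, then heap 0 in detail.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0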
00:13:04.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:04.107 09:06:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:04.107 09:06:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:13:04.107 09:06:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:04.107 09:06:16 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:13:04.107 09:06:16 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:04.107 09:06:16 event -- common/autotest_common.sh@10 -- # set +x 00:13:04.107 ************************************ 00:13:04.107 START TEST event_perf 00:13:04.107 ************************************ 00:13:04.107 09:06:16 event.event_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:04.107 Running I/O for 1 seconds...[2024-05-15 09:06:16.383146] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:04.107 [2024-05-15 09:06:16.384124] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ] 00:13:04.107 [2024-05-15 09:06:16.529681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.366 [2024-05-15 09:06:16.653925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.366 [2024-05-15 09:06:16.653985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.366 [2024-05-15 09:06:16.654165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.366 [2024-05-15 09:06:16.654169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.742 Running I/O for 1 seconds... 00:13:05.742 lcore 0: 160241 00:13:05.742 lcore 1: 160241 00:13:05.742 lcore 2: 160243 00:13:05.742 lcore 3: 160242 00:13:05.742 done. 00:13:05.742 ************************************ 00:13:05.742 END TEST event_perf 00:13:05.742 ************************************ 00:13:05.742 00:13:05.742 real 0m1.403s 00:13:05.742 user 0m4.189s 00:13:05.742 sys 0m0.070s 00:13:05.742 09:06:17 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:05.742 09:06:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:13:05.742 09:06:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:13:05.742 09:06:17 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:05.742 09:06:17 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:05.742 09:06:17 event -- common/autotest_common.sh@10 -- # set +x 00:13:05.742 ************************************ 00:13:05.742 START TEST event_reactor 00:13:05.742 ************************************ 00:13:05.742 09:06:17 event.event_reactor -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:13:05.742 [2024-05-15 09:06:17.852765] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:05.743 [2024-05-15 09:06:17.853208] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:13:05.743 [2024-05-15 09:06:18.001514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.743 [2024-05-15 09:06:18.124922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.119 test_start 00:13:07.119 oneshot 00:13:07.119 tick 100 00:13:07.119 tick 100 00:13:07.119 tick 250 00:13:07.119 tick 100 00:13:07.119 tick 100 00:13:07.119 tick 100 00:13:07.119 tick 250 00:13:07.119 tick 500 00:13:07.119 tick 100 00:13:07.119 tick 100 00:13:07.119 tick 250 00:13:07.119 tick 100 00:13:07.119 tick 100 00:13:07.119 test_end 00:13:07.119 ************************************ 00:13:07.119 END TEST event_reactor 00:13:07.119 ************************************ 00:13:07.119 00:13:07.119 real 0m1.404s 00:13:07.119 user 0m1.234s 00:13:07.119 sys 0m0.058s 00:13:07.119 09:06:19 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:07.119 09:06:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:13:07.119 09:06:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:07.119 09:06:19 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:07.119 09:06:19 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:07.119 09:06:19 event -- common/autotest_common.sh@10 -- # set +x 00:13:07.119 ************************************ 00:13:07.119 START TEST event_reactor_perf 00:13:07.119 ************************************ 00:13:07.119 09:06:19 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:07.119 [2024-05-15 09:06:19.302282] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:07.119 [2024-05-15 09:06:19.303142] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59260 ] 00:13:07.119 [2024-05-15 09:06:19.439339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.119 [2024-05-15 09:06:19.545626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.511 test_start 00:13:08.511 test_end 00:13:08.511 Performance: 393971 events per second 00:13:08.511 00:13:08.511 real 0m1.367s 00:13:08.511 user 0m1.204s 00:13:08.511 sys 0m0.054s 00:13:08.511 09:06:20 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:08.511 09:06:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:13:08.511 ************************************ 00:13:08.511 END TEST event_reactor_perf 00:13:08.511 ************************************ 00:13:08.511 09:06:20 event -- event/event.sh@49 -- # uname -s 00:13:08.512 09:06:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:13:08.512 09:06:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:13:08.512 09:06:20 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:08.512 09:06:20 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:08.512 09:06:20 event -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 ************************************ 00:13:08.512 START TEST event_scheduler 00:13:08.512 ************************************ 00:13:08.512 09:06:20 event.event_scheduler -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:13:08.512 * Looking for test storage... 00:13:08.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:13:08.512 09:06:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:13:08.512 09:06:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59322 00:13:08.512 09:06:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:13:08.512 09:06:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:13:08.512 09:06:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59322 00:13:08.512 09:06:20 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 59322 ']' 00:13:08.512 09:06:20 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.512 09:06:20 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:08.512 09:06:20 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.512 09:06:20 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:08.512 09:06:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 [2024-05-15 09:06:20.849677] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:08.512 [2024-05-15 09:06:20.850044] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59322 ] 00:13:08.771 [2024-05-15 09:06:20.998742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.771 [2024-05-15 09:06:21.135433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.771 [2024-05-15 09:06:21.135588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.771 [2024-05-15 09:06:21.135638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.771 [2024-05-15 09:06:21.135640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.706 09:06:21 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:09.706 09:06:21 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:13:09.706 09:06:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:13:09.706 09:06:21 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.706 09:06:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:09.706 POWER: Env isn't set yet! 00:13:09.706 POWER: Attempting to initialise ACPI cpufreq power management... 00:13:09.706 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:09.706 POWER: Cannot set governor of lcore 0 to userspace 00:13:09.706 POWER: Attempting to initialise PSTAT power management... 00:13:09.706 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:09.706 POWER: Cannot set governor of lcore 0 to performance 00:13:09.706 POWER: Attempting to initialise AMD PSTATE power management... 00:13:09.706 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:09.706 POWER: Cannot set governor of lcore 0 to userspace 00:13:09.706 POWER: Attempting to initialise CPPC power management... 00:13:09.706 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:09.706 POWER: Cannot set governor of lcore 0 to userspace 00:13:09.706 POWER: Attempting to initialise VM power management... 00:13:09.706 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:13:09.706 POWER: Unable to set Power Management Environment for lcore 0 00:13:09.706 [2024-05-15 09:06:21.891342] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:13:09.706 [2024-05-15 09:06:21.891447] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:13:09.706 [2024-05-15 09:06:21.891503] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:13:09.706 09:06:21 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.706 09:06:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:13:09.707 09:06:21 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 [2024-05-15 09:06:21.976439] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
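The scheduler test above follows the usual SPDK bring-up for selecting a scheduler: the app is launched with --wait-for-rpc, the scheduler is chosen over the RPC socket before subsystem initialization, and only then is framework_start_init issued. The POWER:/dpdk_governor errors are expected on this VM, which exposes no cpufreq scaling files, so the dynamic scheduler simply initializes without the DPDK governor and the test continues. A minimal sketch of the same sequence outside the autotest wrappers, using the paths from this run and the default /var/tmp/spdk.sock socket (the test's rpc_cmd helper and waitforlisten do the equivalent):

  SPDK=/home/vagrant/spdk_repo/spdk                        # checkout path used by this run
  "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  sleep 1                                                  # crude stand-in for waitforlisten
  "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic   # must happen before init
  "$SPDK/scripts/rpc.py" framework_start_init              # finish init; reactors begin scheduling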
00:13:09.707 09:06:21 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:13:09.707 09:06:21 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:09.707 09:06:21 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:09.707 09:06:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 ************************************ 00:13:09.707 START TEST scheduler_create_thread 00:13:09.707 ************************************ 00:13:09.707 09:06:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:13:09.707 09:06:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:13:09.707 09:06:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 2 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 3 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 4 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 5 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 6 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 7 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 8 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 9 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 10 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.707 09:06:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:11.118 09:06:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.118 09:06:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:13:11.118 09:06:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:13:11.118 09:06:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.118 09:06:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:12.498 ************************************ 00:13:12.498 END TEST scheduler_create_thread 00:13:12.498 ************************************ 00:13:12.498 09:06:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:12.498 00:13:12.498 real 0m2.614s 00:13:12.498 user 0m0.019s 00:13:12.498 sys 0m0.005s 00:13:12.498 09:06:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:12.498 09:06:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:12.498 09:06:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:12.498 09:06:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59322 00:13:12.498 09:06:24 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 59322 ']' 00:13:12.498 09:06:24 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 59322 00:13:12.498 09:06:24 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:13:12.498 09:06:24 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:12.498 09:06:24 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59322 00:13:12.498 09:06:24 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:13:12.499 09:06:24 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:13:12.499 09:06:24 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59322' 00:13:12.499 killing process with pid 59322 00:13:12.499 09:06:24 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 59322 00:13:12.499 09:06:24 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 59322 00:13:12.785 [2024-05-15 09:06:25.084503] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
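The scheduler_create_thread section above drives the test's RPC plugin rather than the stock rpc.py command set: scheduler_thread_create registers SPDK threads with a given cpumask and target busy percentage, scheduler_thread_set_active changes that percentage at runtime, and scheduler_thread_delete removes a thread again, so the dynamic scheduler has work to rebalance. A condensed sketch of that lifecycle, assuming the scheduler app from above is still running and that PYTHONPATH lets rpc.py import the test's scheduler_plugin module (the autotest arranges this; the helper name below is mine):

  SPDK=/home/vagrant/spdk_repo/spdk
  splugin_rpc() { "$SPDK/scripts/rpc.py" --plugin scheduler_plugin "$@"; }
  id=$(splugin_rpc scheduler_thread_create -n half_active -a 0)   # idle thread; prints its thread id
  splugin_rpc scheduler_thread_set_active "$id" 50                # raise it to 50% busy
  id=$(splugin_rpc scheduler_thread_create -n deleted -a 100)     # fully busy thread...
  splugin_rpc scheduler_thread_delete "$id"                       # ...then delete it again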
00:13:13.043 00:13:13.043 real 0m4.622s 00:13:13.043 user 0m8.771s 00:13:13.043 sys 0m0.349s 00:13:13.043 09:06:25 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:13.043 ************************************ 00:13:13.043 END TEST event_scheduler 00:13:13.043 ************************************ 00:13:13.043 09:06:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:13.043 09:06:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:13:13.043 09:06:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:13:13.043 09:06:25 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:13.043 09:06:25 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:13.043 09:06:25 event -- common/autotest_common.sh@10 -- # set +x 00:13:13.043 ************************************ 00:13:13.043 START TEST app_repeat 00:13:13.043 ************************************ 00:13:13.043 09:06:25 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59421 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:13:13.043 Process app_repeat pid: 59421 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59421' 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:13:13.043 spdk_app_start Round 0 00:13:13.043 09:06:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59421 /var/tmp/spdk-nbd.sock 00:13:13.043 09:06:25 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 59421 ']' 00:13:13.043 09:06:25 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:13.043 09:06:25 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:13.043 09:06:25 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:13.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:13.043 09:06:25 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:13.043 09:06:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:13.043 [2024-05-15 09:06:25.425920] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:13.043 [2024-05-15 09:06:25.426166] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59421 ] 00:13:13.301 [2024-05-15 09:06:25.565063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:13.301 [2024-05-15 09:06:25.691806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.301 [2024-05-15 09:06:25.691821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.867 09:06:26 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:13.867 09:06:26 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:13:13.867 09:06:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:14.124 Malloc0 00:13:14.124 09:06:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:14.382 Malloc1 00:13:14.382 09:06:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:14.382 09:06:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:14.640 /dev/nbd0 00:13:14.640 09:06:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:14.640 09:06:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:13:14.640 09:06:27 event.app_repeat -- 
common/autotest_common.sh@870 -- # break 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:14.640 1+0 records in 00:13:14.640 1+0 records out 00:13:14.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051597 s, 7.9 MB/s 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:13:14.640 09:06:27 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:13:14.640 09:06:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:14.640 09:06:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:14.640 09:06:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:14.897 /dev/nbd1 00:13:14.898 09:06:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:14.898 09:06:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:15.155 09:06:27 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:13:15.155 09:06:27 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:13:15.155 09:06:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:15.156 1+0 records in 00:13:15.156 1+0 records out 00:13:15.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489656 s, 8.4 MB/s 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:13:15.156 09:06:27 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:13:15.156 09:06:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.156 09:06:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.156 09:06:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:15.156 09:06:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.156 
09:06:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:15.444 09:06:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:15.444 { 00:13:15.444 "nbd_device": "/dev/nbd0", 00:13:15.444 "bdev_name": "Malloc0" 00:13:15.444 }, 00:13:15.444 { 00:13:15.444 "nbd_device": "/dev/nbd1", 00:13:15.444 "bdev_name": "Malloc1" 00:13:15.444 } 00:13:15.444 ]' 00:13:15.444 09:06:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:15.444 09:06:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:15.444 { 00:13:15.444 "nbd_device": "/dev/nbd0", 00:13:15.444 "bdev_name": "Malloc0" 00:13:15.444 }, 00:13:15.444 { 00:13:15.444 "nbd_device": "/dev/nbd1", 00:13:15.444 "bdev_name": "Malloc1" 00:13:15.444 } 00:13:15.444 ]' 00:13:15.444 09:06:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:15.444 /dev/nbd1' 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:15.445 /dev/nbd1' 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:15.445 256+0 records in 00:13:15.445 256+0 records out 00:13:15.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715789 s, 146 MB/s 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:15.445 256+0 records in 00:13:15.445 256+0 records out 00:13:15.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293985 s, 35.7 MB/s 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:15.445 256+0 records in 00:13:15.445 256+0 records out 00:13:15.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326529 s, 32.1 MB/s 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:15.445 09:06:27 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.445 09:06:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.011 09:06:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:16.269 09:06:28 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.269 09:06:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:16.593 09:06:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:16.593 09:06:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:16.852 09:06:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:17.111 [2024-05-15 09:06:29.327531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:17.111 [2024-05-15 09:06:29.433869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.111 [2024-05-15 09:06:29.433875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.111 [2024-05-15 09:06:29.479192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:17.111 [2024-05-15 09:06:29.479508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:20.398 spdk_app_start Round 1 00:13:20.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:20.398 09:06:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:20.398 09:06:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:13:20.398 09:06:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59421 /var/tmp/spdk-nbd.sock 00:13:20.398 09:06:32 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 59421 ']' 00:13:20.398 09:06:32 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:20.398 09:06:32 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:20.398 09:06:32 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
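Each app_repeat round above exercises the same data path: create malloc bdevs over the app's private RPC socket, export them through the kernel nbd driver, write a random pattern with dd and verify it with cmp, then stop the nbd devices and send spdk_kill_instance so the app can start the next round. A condensed sketch of one round using the commands from this log (assumes the nbd module is loaded and app_repeat is already listening on /var/tmp/spdk-nbd.sock; the scratch-file path is mine, the test keeps its file under test/event/):

  SPDK=/home/vagrant/spdk_repo/spdk
  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create 64 4096               # 64 MB bdev with 4096-byte blocks -> "Malloc0"
  rpc nbd_start_disk Malloc0 /dev/nbd0         # expose the bdev as a kernel block device
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0      # byte-for-byte readback check
  rpc nbd_stop_disk /dev/nbd0
  rpc spdk_kill_instance SIGTERM               # ends the round; app_repeat brings the app back up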
00:13:20.398 09:06:32 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:20.398 09:06:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:20.398 09:06:32 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:20.398 09:06:32 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:13:20.398 09:06:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:20.398 Malloc0 00:13:20.398 09:06:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:20.656 Malloc1 00:13:20.656 09:06:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:20.656 09:06:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.656 09:06:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:20.656 09:06:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:20.657 09:06:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:20.920 /dev/nbd0 00:13:20.921 09:06:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:20.921 09:06:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:20.921 1+0 records in 00:13:20.921 1+0 records out 
00:13:20.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452201 s, 9.1 MB/s 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:13:20.921 09:06:33 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:13:20.921 09:06:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.921 09:06:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:20.921 09:06:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:21.489 /dev/nbd1 00:13:21.489 09:06:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:21.489 09:06:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:21.489 1+0 records in 00:13:21.489 1+0 records out 00:13:21.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054518 s, 7.5 MB/s 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:13:21.489 09:06:33 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:13:21.489 09:06:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.489 09:06:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:21.489 09:06:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:21.489 09:06:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.489 09:06:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:21.748 09:06:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:21.748 { 00:13:21.748 "nbd_device": "/dev/nbd0", 00:13:21.748 "bdev_name": "Malloc0" 00:13:21.748 }, 00:13:21.748 { 00:13:21.748 "nbd_device": "/dev/nbd1", 00:13:21.748 "bdev_name": "Malloc1" 00:13:21.748 } 00:13:21.748 
]' 00:13:21.748 09:06:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:21.748 09:06:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:21.748 { 00:13:21.748 "nbd_device": "/dev/nbd0", 00:13:21.748 "bdev_name": "Malloc0" 00:13:21.748 }, 00:13:21.748 { 00:13:21.748 "nbd_device": "/dev/nbd1", 00:13:21.748 "bdev_name": "Malloc1" 00:13:21.748 } 00:13:21.748 ]' 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:21.748 /dev/nbd1' 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:21.748 /dev/nbd1' 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:21.748 256+0 records in 00:13:21.748 256+0 records out 00:13:21.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00962074 s, 109 MB/s 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:21.748 256+0 records in 00:13:21.748 256+0 records out 00:13:21.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222828 s, 47.1 MB/s 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:21.748 256+0 records in 00:13:21.748 256+0 records out 00:13:21.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285199 s, 36.8 MB/s 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.748 09:06:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:22.006 09:06:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.006 09:06:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.006 09:06:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.007 09:06:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.007 09:06:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.007 09:06:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.007 09:06:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:22.007 09:06:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.007 09:06:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.007 09:06:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.266 09:06:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:22.525 09:06:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:22.525 09:06:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:22.525 09:06:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:13:22.525 09:06:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:22.525 09:06:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:22.525 09:06:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:22.783 09:06:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:22.783 09:06:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:22.783 09:06:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:22.783 09:06:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:22.783 09:06:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:22.783 09:06:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:22.783 09:06:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:23.064 09:06:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:23.064 [2024-05-15 09:06:35.433207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:23.323 [2024-05-15 09:06:35.542091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.323 [2024-05-15 09:06:35.542098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.323 [2024-05-15 09:06:35.588951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:23.323 [2024-05-15 09:06:35.589238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:25.855 spdk_app_start Round 2 00:13:25.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:25.855 09:06:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:25.855 09:06:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:13:25.855 09:06:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59421 /var/tmp/spdk-nbd.sock 00:13:25.855 09:06:38 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 59421 ']' 00:13:25.855 09:06:38 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:25.856 09:06:38 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:25.856 09:06:38 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
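The repeated "grep -q -w nbdN /proc/partitions ... break" traces in these rounds come from the test's waitfornbd and waitfornbd_exit helpers, which poll the kernel's partition table until the nbd device appears after nbd_start_disk (or disappears after nbd_stop_disk), giving up after 20 attempts. A small sketch of that polling pattern (function name and sleep interval are mine; the real helper also reads one 4096-byte block with dd once the device shows up):

  wait_for_nbd() {                      # poll until $1 (e.g. nbd0) is listed in /proc/partitions
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && return 0
          sleep 0.1
      done
      return 1
  }
  wait_for_nbd nbd1 || echo "nbd1 never appeared" >&2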
00:13:25.856 09:06:38 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:25.856 09:06:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:26.115 09:06:38 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:26.115 09:06:38 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:13:26.115 09:06:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:26.374 Malloc0 00:13:26.374 09:06:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:26.632 Malloc1 00:13:26.632 09:06:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:26.632 09:06:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:26.891 /dev/nbd0 00:13:26.891 09:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:26.891 09:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:26.891 1+0 records in 00:13:26.891 1+0 records out 
00:13:26.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285107 s, 14.4 MB/s 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:13:26.891 09:06:39 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:13:26.891 09:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.891 09:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:26.891 09:06:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:27.150 /dev/nbd1 00:13:27.150 09:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:27.150 09:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:27.150 1+0 records in 00:13:27.150 1+0 records out 00:13:27.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005261 s, 7.8 MB/s 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:13:27.150 09:06:39 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:13:27.150 09:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.150 09:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:27.150 09:06:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:27.150 09:06:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.150 09:06:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:27.408 09:06:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:27.408 { 00:13:27.408 "nbd_device": "/dev/nbd0", 00:13:27.408 "bdev_name": "Malloc0" 00:13:27.408 }, 00:13:27.408 { 00:13:27.408 "nbd_device": "/dev/nbd1", 00:13:27.408 "bdev_name": "Malloc1" 00:13:27.408 } 00:13:27.408 
]' 00:13:27.408 09:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:27.408 { 00:13:27.408 "nbd_device": "/dev/nbd0", 00:13:27.408 "bdev_name": "Malloc0" 00:13:27.408 }, 00:13:27.408 { 00:13:27.408 "nbd_device": "/dev/nbd1", 00:13:27.409 "bdev_name": "Malloc1" 00:13:27.409 } 00:13:27.409 ]' 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:27.409 /dev/nbd1' 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:27.409 /dev/nbd1' 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:27.409 256+0 records in 00:13:27.409 256+0 records out 00:13:27.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120473 s, 87.0 MB/s 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:27.409 09:06:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:27.409 256+0 records in 00:13:27.409 256+0 records out 00:13:27.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260734 s, 40.2 MB/s 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:27.667 256+0 records in 00:13:27.667 256+0 records out 00:13:27.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328687 s, 31.9 MB/s 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.667 09:06:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.926 09:06:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.185 09:06:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:28.443 09:06:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:28.443 09:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:28.443 09:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
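The block above is the nbd data-verify pattern these event tests rely on: write 1 MiB of random data to each exported /dev/nbdX with O_DIRECT, read it back with cmp, then detach the devices over the RPC socket and confirm nbd_get_disks returns an empty list. A condensed, stand-alone sketch of that sequence follows; the rpc.py path, socket and device names are taken from the trace, the temp-file location is arbitrary, and error handling is left out.

    #!/usr/bin/env bash
    # Sketch of the nbd write/verify/stop flow traced above. Assumes an SPDK
    # target is already listening on /var/tmp/spdk-nbd.sock with /dev/nbd0 and
    # /dev/nbd1 attached (nbd_start_disk Malloc0 / Malloc1).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=$(mktemp)                      # stand-in for the nbdrandtest file in the trace
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                              # verify phase
    done
    rm -f "$tmp"
    for dev in "${nbd_list[@]}"; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"                  # detach each device
    done
    "$rpc" -s "$sock" nbd_get_disks                             # should print []

The grep of /proc/partitions seen in waitfornbd and waitfornbd_exit is omitted here; it only polls until the kernel has actually created or removed the nbd node.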
00:13:28.443 09:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:28.701 09:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:28.701 09:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:28.701 09:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:28.701 09:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:28.701 09:06:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:28.701 09:06:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:28.701 09:06:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:28.701 09:06:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:28.701 09:06:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:28.960 09:06:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:28.960 [2024-05-15 09:06:41.401199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.219 [2024-05-15 09:06:41.512692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.219 [2024-05-15 09:06:41.512699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.219 [2024-05-15 09:06:41.559456] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:29.219 [2024-05-15 09:06:41.559913] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:32.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:32.504 09:06:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59421 /var/tmp/spdk-nbd.sock 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 59421 ']' 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
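Shutdown of the app_repeat target above goes through the same RPC channel as everything else: spdk_kill_instance has the running application deliver the given signal to itself, so it exits through its normal signal handling instead of being killed from outside. The call, exactly as traced, only reformatted:

    # Tell the SPDK app listening on /var/tmp/spdk-nbd.sock to send itself SIGTERM.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM

The -s option is what keeps the instances in this run apart: the event tests talk to /var/tmp/spdk-nbd.sock here, while the cpu_locks tests below use /var/tmp/spdk.sock and /var/tmp/spdk2.sock.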
00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:13:32.504 09:06:44 event.app_repeat -- event/event.sh@39 -- # killprocess 59421 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 59421 ']' 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 59421 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59421 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59421' 00:13:32.504 killing process with pid 59421 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@966 -- # kill 59421 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@971 -- # wait 59421 00:13:32.504 spdk_app_start is called in Round 0. 00:13:32.504 Shutdown signal received, stop current app iteration 00:13:32.504 Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 reinitialization... 00:13:32.504 spdk_app_start is called in Round 1. 00:13:32.504 Shutdown signal received, stop current app iteration 00:13:32.504 Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 reinitialization... 00:13:32.504 spdk_app_start is called in Round 2. 00:13:32.504 Shutdown signal received, stop current app iteration 00:13:32.504 Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 reinitialization... 00:13:32.504 spdk_app_start is called in Round 3. 00:13:32.504 Shutdown signal received, stop current app iteration 00:13:32.504 09:06:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:13:32.504 09:06:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:13:32.504 00:13:32.504 real 0m19.324s 00:13:32.504 user 0m42.603s 00:13:32.504 sys 0m3.493s 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:32.504 09:06:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:32.504 ************************************ 00:13:32.504 END TEST app_repeat 00:13:32.504 ************************************ 00:13:32.504 09:06:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:13:32.504 09:06:44 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:13:32.504 09:06:44 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:32.504 09:06:44 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:32.504 09:06:44 event -- common/autotest_common.sh@10 -- # set +x 00:13:32.504 ************************************ 00:13:32.504 START TEST cpu_locks 00:13:32.504 ************************************ 00:13:32.504 09:06:44 event.cpu_locks -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:13:32.504 * Looking for test storage... 
00:13:32.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:32.504 09:06:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:13:32.504 09:06:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:13:32.504 09:06:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:13:32.504 09:06:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:13:32.504 09:06:44 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:32.504 09:06:44 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:32.504 09:06:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:32.504 ************************************ 00:13:32.504 START TEST default_locks 00:13:32.504 ************************************ 00:13:32.504 09:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:13:32.504 09:06:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59854 00:13:32.504 09:06:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59854 00:13:32.504 09:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 59854 ']' 00:13:32.505 09:06:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:32.505 09:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.505 09:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:32.505 09:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.505 09:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:32.505 09:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:32.505 [2024-05-15 09:06:44.936525] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:32.505 [2024-05-15 09:06:44.936824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:13:32.763 [2024-05-15 09:06:45.071970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.763 [2024-05-15 09:06:45.178943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.697 09:06:45 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:33.697 09:06:45 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:13:33.697 09:06:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59854 00:13:33.697 09:06:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59854 00:13:33.697 09:06:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59854 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 59854 ']' 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 59854 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59854 00:13:33.989 killing process with pid 59854 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59854' 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 59854 00:13:33.989 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 59854 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59854 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 59854 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 59854 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 59854 ']' 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:34.556 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (59854) - No such process 00:13:34.556 ERROR: process (pid: 59854) is no longer running 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:34.556 00:13:34.556 real 0m1.849s 00:13:34.556 user 0m1.976s 00:13:34.556 sys 0m0.517s 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:34.556 09:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:34.556 ************************************ 00:13:34.556 END TEST default_locks 00:13:34.556 ************************************ 00:13:34.556 09:06:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:13:34.556 09:06:46 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:34.556 09:06:46 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:34.556 09:06:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:34.556 ************************************ 00:13:34.556 START TEST default_locks_via_rpc 00:13:34.556 ************************************ 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59906 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59906 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 59906 ']' 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:34.556 09:06:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.556 [2024-05-15 09:06:46.839983] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:34.556 [2024-05-15 09:06:46.840302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:13:34.556 [2024-05-15 09:06:46.979500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.815 [2024-05-15 09:06:47.106526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.381 09:06:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59906 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59906 00:13:35.382 09:06:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59906 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 59906 ']' 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 59906 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59906 
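The default_locks_via_rpc steps above exercise the runtime toggles for core locking: framework_disable_cpumask_locks drops the per-core lock files, the no_locks helper confirms nothing matching /var/tmp/spdk_cpu_lock_* is left, framework_enable_cpumask_locks re-claims them, and locks_exist then greps lslocks output for the target pid for an spdk_cpu_lock entry. A rough equivalent against a target on the default /var/tmp/spdk.sock is sketched below; the pid is taken from this trace and rpc_cmd is replaced by a direct rpc.py call.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=59906                                  # spdk_tgt under test in this trace

    "$rpc" framework_disable_cpumask_locks     # release the core lock(s)
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null    # expected: no output
    "$rpc" framework_enable_cpumask_locks      # take the lock(s) back
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"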
00:13:36.010 killing process with pid 59906 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59906' 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 59906 00:13:36.010 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 59906 00:13:36.286 ************************************ 00:13:36.286 END TEST default_locks_via_rpc 00:13:36.286 ************************************ 00:13:36.286 00:13:36.286 real 0m1.883s 00:13:36.286 user 0m1.999s 00:13:36.286 sys 0m0.572s 00:13:36.286 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:36.286 09:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.286 09:06:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:13:36.286 09:06:48 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:36.286 09:06:48 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:36.286 09:06:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:36.286 ************************************ 00:13:36.286 START TEST non_locking_app_on_locked_coremask 00:13:36.286 ************************************ 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59957 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59957 /var/tmp/spdk.sock 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 59957 ']' 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:36.286 09:06:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:36.546 [2024-05-15 09:06:48.780920] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:36.546 [2024-05-15 09:06:48.781978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59957 ] 00:13:36.546 [2024-05-15 09:06:48.923707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.804 [2024-05-15 09:06:49.041316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59973 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59973 /var/tmp/spdk2.sock 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 59973 ']' 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:37.370 09:06:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 [2024-05-15 09:06:49.750006] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:37.370 [2024-05-15 09:06:49.750764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59973 ] 00:13:37.628 [2024-05-15 09:06:49.881096] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
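The second target being started here reuses core mask 0x1, which pid 59957 already claimed, but it is given --disable-cpumask-locks plus its own RPC socket, so it skips lock acquisition ('CPU core locks deactivated.') and the two processes share core 0 without conflict; the later locks_exist check accordingly runs against 59957, the only instance actually holding the lock. The launch pair, with the binary path and socket as in the trace:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x1 &                                                  # primary: claims core 0
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # secondary: same core, no lock taken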
00:13:37.628 [2024-05-15 09:06:49.881151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.886 [2024-05-15 09:06:50.138449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.452 09:06:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:38.452 09:06:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:13:38.452 09:06:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59957 00:13:38.452 09:06:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59957 00:13:38.452 09:06:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59957 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 59957 ']' 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 59957 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59957 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:39.386 killing process with pid 59957 00:13:39.386 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59957' 00:13:39.387 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 59957 00:13:39.387 09:06:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 59957 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59973 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 59973 ']' 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 59973 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59973 00:13:40.322 killing process with pid 59973 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59973' 00:13:40.322 09:06:52 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 59973 00:13:40.322 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 59973 00:13:40.581 00:13:40.581 real 0m4.173s 00:13:40.581 user 0m4.725s 00:13:40.581 sys 0m1.079s 00:13:40.581 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:40.581 09:06:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:40.581 ************************************ 00:13:40.581 END TEST non_locking_app_on_locked_coremask 00:13:40.581 ************************************ 00:13:40.581 09:06:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:40.581 09:06:52 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:40.581 09:06:52 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:40.581 09:06:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:40.581 ************************************ 00:13:40.581 START TEST locking_app_on_unlocked_coremask 00:13:40.581 ************************************ 00:13:40.581 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:13:40.581 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60040 00:13:40.581 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:40.581 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60040 /var/tmp/spdk.sock 00:13:40.581 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 60040 ']' 00:13:40.582 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.582 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:40.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.582 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.582 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:40.582 09:06:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:40.582 [2024-05-15 09:06:53.000192] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:40.582 [2024-05-15 09:06:53.000575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:13:40.839 [2024-05-15 09:06:53.141322] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
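locking_app_on_unlocked_coremask flips the roles: this time the primary target (pid 60040) is the one launched with --disable-cpumask-locks, leaving core 0 unclaimed, and the secondary instance started next without the flag is the process that ends up holding the lock, which is why the locks_exist check later in the trace runs against pid 60056 rather than 60040. Launch order as traced:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x1 --disable-cpumask-locks &     # primary: runs unlocked on core 0
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &      # secondary: acquires the core 0 lock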
00:13:40.839 [2024-05-15 09:06:53.141639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.839 [2024-05-15 09:06:53.277892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:41.771 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:41.771 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:13:41.771 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60056 00:13:41.772 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:41.772 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60056 /var/tmp/spdk2.sock 00:13:41.772 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 60056 ']' 00:13:41.772 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:41.772 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:41.772 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:41.772 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:41.772 09:06:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:41.772 [2024-05-15 09:06:53.934496] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:41.772 [2024-05-15 09:06:53.934849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:13:41.772 [2024-05-15 09:06:54.065601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.029 [2024-05-15 09:06:54.289897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.604 09:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:42.604 09:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:13:42.604 09:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60056 00:13:42.604 09:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60056 00:13:42.604 09:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60040 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 60040 ']' 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 60040 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60040 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60040' 00:13:43.570 killing process with pid 60040 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 60040 00:13:43.570 09:06:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 60040 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60056 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 60056 ']' 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 60056 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60056 00:13:44.183 killing process with pid 60056 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60056' 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 60056 00:13:44.183 09:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 60056 00:13:44.760 00:13:44.760 real 0m4.246s 00:13:44.760 user 0m4.578s 00:13:44.760 sys 0m1.092s 00:13:44.760 ************************************ 00:13:44.760 END TEST locking_app_on_unlocked_coremask 00:13:44.760 ************************************ 00:13:44.760 09:06:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:44.760 09:06:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:45.019 09:06:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:13:45.019 09:06:57 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:45.019 09:06:57 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:45.019 09:06:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:45.019 ************************************ 00:13:45.019 START TEST locking_app_on_locked_coremask 00:13:45.019 ************************************ 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60123 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60123 /var/tmp/spdk.sock 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 60123 ']' 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.019 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:45.020 09:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:45.020 [2024-05-15 09:06:57.284088] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:45.020 [2024-05-15 09:06:57.284398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60123 ] 00:13:45.020 [2024-05-15 09:06:57.417215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.379 [2024-05-15 09:06:57.541474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60139 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60139 /var/tmp/spdk2.sock 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 60139 /var/tmp/spdk2.sock 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:13:45.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 60139 /var/tmp/spdk2.sock 00:13:45.963 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 60139 ']' 00:13:45.964 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:45.964 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:45.964 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:45.964 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:45.964 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:45.964 [2024-05-15 09:06:58.259340] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
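NOT waitforlisten 60139 /var/tmp/spdk2.sock above is an expected-failure assertion: the second spdk_tgt is pointed at core 0 without --disable-cpumask-locks while pid 60123 still holds the lock, so it should abort during startup (the 'Cannot create lock on core 0' error follows just below), and NOT inverts the wrapped command's exit status so the test only passes when listening never happens. A rough stand-in for that wrapper is sketched here; the real helper in autotest_common.sh also validates its argument and tracks the es counter visible in the trace.

    # Illustrative approximation only: succeed when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT waitforlisten 60139 /var/tmp/spdk2.sock && echo 'second instance failed to start, as expected'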
00:13:45.964 [2024-05-15 09:06:58.260215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60139 ] 00:13:45.964 [2024-05-15 09:06:58.391489] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60123 has claimed it. 00:13:45.964 [2024-05-15 09:06:58.391574] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:46.897 ERROR: process (pid: 60139) is no longer running 00:13:46.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (60139) - No such process 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60123 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60123 00:13:46.897 09:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60123 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 60123 ']' 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 60123 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60123 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:47.155 killing process with pid 60123 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60123' 00:13:47.155 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 60123 00:13:47.156 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 60123 00:13:47.725 ************************************ 00:13:47.725 END TEST locking_app_on_locked_coremask 00:13:47.725 ************************************ 00:13:47.725 00:13:47.725 real 0m2.634s 00:13:47.725 user 0m3.000s 00:13:47.725 sys 0m0.664s 00:13:47.725 09:06:59 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:47.725 09:06:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:47.725 09:06:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:47.725 09:06:59 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:47.725 09:06:59 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:47.725 09:06:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:47.725 ************************************ 00:13:47.725 START TEST locking_overlapped_coremask 00:13:47.725 ************************************ 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60190 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60190 /var/tmp/spdk.sock 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 60190 ']' 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:47.725 09:06:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:47.725 [2024-05-15 09:06:59.970233] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:47.725 [2024-05-15 09:06:59.970556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60190 ] 00:13:47.725 [2024-05-15 09:07:00.106424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:47.996 [2024-05-15 09:07:00.214376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.996 [2024-05-15 09:07:00.214486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.996 [2024-05-15 09:07:00.214485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.562 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:48.562 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:13:48.562 09:07:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60207 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60207 /var/tmp/spdk2.sock 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 60207 /var/tmp/spdk2.sock 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 60207 /var/tmp/spdk2.sock 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 60207 ']' 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:48.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:48.563 09:07:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:48.563 [2024-05-15 09:07:00.923346] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
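The conflict this test arranges lives entirely in the hex core masks: the primary target owns -m 0x7 (bits 0-2, so cores 0, 1 and 2, matching the three reactors above), while the instance being started now asks for -m 0x1c (bits 2-4, cores 2 to 4), so the two masks collide exactly on core 2, the core named in the 'Cannot create lock on core 2' error that follows. The overlap can be checked directly in the shell:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4); AND gives the shared cores.
    printf 'overlapping mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. core 2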
00:13:48.563 [2024-05-15 09:07:00.923754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60207 ] 00:13:48.821 [2024-05-15 09:07:01.067385] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60190 has claimed it. 00:13:48.821 [2024-05-15 09:07:01.067472] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:49.387 ERROR: process (pid: 60207) is no longer running 00:13:49.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (60207) - No such process 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60190 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 60190 ']' 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 60190 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60190 00:13:49.387 killing process with pid 60190 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60190' 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 60190 00:13:49.387 09:07:01 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@971 -- # wait 60190 00:13:49.644 ************************************ 00:13:49.644 END TEST locking_overlapped_coremask 00:13:49.644 ************************************ 00:13:49.644 00:13:49.644 real 0m2.165s 00:13:49.644 user 0m5.916s 00:13:49.644 sys 0m0.415s 00:13:49.644 09:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:49.644 09:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:49.903 09:07:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:49.903 09:07:02 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:49.903 09:07:02 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:49.903 09:07:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:49.903 ************************************ 00:13:49.903 START TEST locking_overlapped_coremask_via_rpc 00:13:49.903 ************************************ 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60248 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60248 /var/tmp/spdk.sock 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 60248 ']' 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:49.903 09:07:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.903 [2024-05-15 09:07:02.197495] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:49.903 [2024-05-15 09:07:02.197910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60248 ] 00:13:49.903 [2024-05-15 09:07:02.340683] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:49.903 [2024-05-15 09:07:02.341440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.161 [2024-05-15 09:07:02.452655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.161 [2024-05-15 09:07:02.452790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.161 [2024-05-15 09:07:02.452801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60268 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60268 /var/tmp/spdk2.sock 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 60268 ']' 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:50.726 09:07:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.726 [2024-05-15 09:07:03.142632] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:50.726 [2024-05-15 09:07:03.142947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60268 ] 00:13:50.982 [2024-05-15 09:07:03.285533] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:50.982 [2024-05-15 09:07:03.285623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.239 [2024-05-15 09:07:03.498732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.239 [2024-05-15 09:07:03.504717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:51.239 [2024-05-15 09:07:03.504720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 [2024-05-15 09:07:04.134727] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60248 has claimed it. 00:13:51.804 request: 00:13:51.804 { 00:13:51.804 "method": "framework_enable_cpumask_locks", 00:13:51.804 "req_id": 1 00:13:51.804 } 00:13:51.804 Got JSON-RPC error response 00:13:51.804 response: 00:13:51.804 { 00:13:51.804 "code": -32603, 00:13:51.804 "message": "Failed to claim CPU core: 2" 00:13:51.804 } 00:13:51.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60248 /var/tmp/spdk.sock 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 60248 ']' 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:51.804 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60268 /var/tmp/spdk2.sock 00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 60268 ']' 00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:52.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:52.063 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.321 ************************************ 00:13:52.321 END TEST locking_overlapped_coremask_via_rpc 00:13:52.321 ************************************ 00:13:52.321 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:52.321 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:13:52.321 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:52.321 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:52.321 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:52.321 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:52.321 00:13:52.321 real 0m2.625s 00:13:52.321 user 0m1.325s 00:13:52.322 sys 0m0.223s 00:13:52.322 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:52.322 09:07:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.626 09:07:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:52.626 09:07:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60248 ]] 00:13:52.626 09:07:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60248 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 60248 ']' 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 60248 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60248 00:13:52.626 killing process with pid 60248 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60248' 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 60248 00:13:52.626 09:07:04 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 60248 00:13:53.192 09:07:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60268 ]] 00:13:53.192 09:07:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60268 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 60268 ']' 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 60268 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:53.192 
09:07:05 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60268 00:13:53.192 killing process with pid 60268 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60268' 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 60268 00:13:53.192 09:07:05 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 60268 00:13:53.759 09:07:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:53.759 09:07:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:53.759 09:07:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60248 ]] 00:13:53.759 09:07:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60248 00:13:53.759 09:07:05 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 60248 ']' 00:13:53.759 09:07:05 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 60248 00:13:53.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (60248) - No such process 00:13:53.759 Process with pid 60248 is not found 00:13:53.759 09:07:05 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 60248 is not found' 00:13:53.759 09:07:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60268 ]] 00:13:53.759 09:07:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60268 00:13:53.759 09:07:05 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 60268 ']' 00:13:53.759 09:07:05 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 60268 00:13:53.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (60268) - No such process 00:13:53.759 Process with pid 60268 is not found 00:13:53.759 09:07:05 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 60268 is not found' 00:13:53.759 09:07:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:53.759 00:13:53.759 real 0m21.164s 00:13:53.759 user 0m37.354s 00:13:53.759 sys 0m5.362s 00:13:53.759 09:07:05 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:53.759 ************************************ 00:13:53.759 END TEST cpu_locks 00:13:53.759 09:07:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:53.759 ************************************ 00:13:53.759 ************************************ 00:13:53.759 END TEST event 00:13:53.759 ************************************ 00:13:53.759 00:13:53.759 real 0m49.721s 00:13:53.759 user 1m35.502s 00:13:53.759 sys 0m9.668s 00:13:53.759 09:07:05 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:53.759 09:07:05 event -- common/autotest_common.sh@10 -- # set +x 00:13:53.759 09:07:06 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:53.759 09:07:06 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:53.759 09:07:06 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:53.759 09:07:06 -- common/autotest_common.sh@10 -- # set +x 00:13:53.759 ************************************ 00:13:53.759 START TEST thread 00:13:53.759 ************************************ 00:13:53.759 09:07:06 thread -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:53.759 * Looking for test storage... 
00:13:53.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:53.759 09:07:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:53.759 09:07:06 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:13:53.759 09:07:06 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:53.759 09:07:06 thread -- common/autotest_common.sh@10 -- # set +x 00:13:53.759 ************************************ 00:13:53.759 START TEST thread_poller_perf 00:13:53.759 ************************************ 00:13:53.759 09:07:06 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:53.759 [2024-05-15 09:07:06.144093] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:53.759 [2024-05-15 09:07:06.144185] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60391 ] 00:13:54.017 [2024-05-15 09:07:06.285879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.017 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:54.017 [2024-05-15 09:07:06.458022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.395 ====================================== 00:13:55.395 busy:2113336940 (cyc) 00:13:55.395 total_run_count: 317000 00:13:55.395 tsc_hz: 2100000000 (cyc) 00:13:55.395 ====================================== 00:13:55.395 poller_cost: 6666 (cyc), 3174 (nsec) 00:13:55.395 00:13:55.395 real 0m1.466s 00:13:55.395 user 0m1.276s 00:13:55.395 sys 0m0.080s 00:13:55.395 09:07:07 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:55.395 09:07:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:55.395 ************************************ 00:13:55.395 END TEST thread_poller_perf 00:13:55.395 ************************************ 00:13:55.396 09:07:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:55.396 09:07:07 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:13:55.396 09:07:07 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:55.396 09:07:07 thread -- common/autotest_common.sh@10 -- # set +x 00:13:55.396 ************************************ 00:13:55.396 START TEST thread_poller_perf 00:13:55.396 ************************************ 00:13:55.396 09:07:07 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:55.396 [2024-05-15 09:07:07.661634] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:13:55.396 [2024-05-15 09:07:07.661728] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60432 ] 00:13:55.396 [2024-05-15 09:07:07.806900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.655 [2024-05-15 09:07:07.924833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.655 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:13:56.589 ====================================== 00:13:56.589 busy:2102349578 (cyc) 00:13:56.589 total_run_count: 4304000 00:13:56.589 tsc_hz: 2100000000 (cyc) 00:13:56.589 ====================================== 00:13:56.589 poller_cost: 488 (cyc), 232 (nsec) 00:13:56.589 00:13:56.589 real 0m1.385s 00:13:56.589 user 0m1.220s 00:13:56.589 sys 0m0.057s 00:13:56.589 ************************************ 00:13:56.589 END TEST thread_poller_perf 00:13:56.589 ************************************ 00:13:56.589 09:07:09 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:56.589 09:07:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:56.847 09:07:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:56.847 00:13:56.847 real 0m3.038s 00:13:56.847 user 0m2.556s 00:13:56.847 sys 0m0.263s 00:13:56.847 09:07:09 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:56.847 09:07:09 thread -- common/autotest_common.sh@10 -- # set +x 00:13:56.847 ************************************ 00:13:56.847 END TEST thread 00:13:56.847 ************************************ 00:13:56.847 09:07:09 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:56.847 09:07:09 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:56.847 09:07:09 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:56.847 09:07:09 -- common/autotest_common.sh@10 -- # set +x 00:13:56.847 ************************************ 00:13:56.847 START TEST accel 00:13:56.847 ************************************ 00:13:56.847 09:07:09 accel -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:56.847 * Looking for test storage... 00:13:56.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:56.847 09:07:09 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:13:56.847 09:07:09 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:13:56.847 09:07:09 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:56.847 09:07:09 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=60501 00:13:56.847 09:07:09 accel -- accel/accel.sh@63 -- # waitforlisten 60501 00:13:56.847 09:07:09 accel -- common/autotest_common.sh@828 -- # '[' -z 60501 ']' 00:13:56.847 09:07:09 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.847 09:07:09 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:56.847 09:07:09 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:56.847 09:07:09 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:56.847 09:07:09 accel -- accel/accel.sh@61 -- # build_accel_config 00:13:56.847 09:07:09 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:13:56.847 09:07:09 accel -- common/autotest_common.sh@10 -- # set +x 00:13:56.847 09:07:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:56.847 09:07:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:56.847 09:07:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:56.847 09:07:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:56.847 09:07:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:56.847 09:07:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:56.847 09:07:09 accel -- accel/accel.sh@41 -- # jq -r . 00:13:56.847 [2024-05-15 09:07:09.279956] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:56.847 [2024-05-15 09:07:09.280080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60501 ] 00:13:57.106 [2024-05-15 09:07:09.424604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.106 [2024-05-15 09:07:09.536752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.040 09:07:10 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:58.040 09:07:10 accel -- common/autotest_common.sh@861 -- # return 0 00:13:58.040 09:07:10 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:13:58.040 09:07:10 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:13:58.040 09:07:10 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:13:58.040 09:07:10 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:13:58.040 09:07:10 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:58.040 09:07:10 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:13:58.040 09:07:10 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:58.040 09:07:10 accel -- common/autotest_common.sh@10 -- # set +x 00:13:58.040 09:07:10 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:13:58.040 09:07:10 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:58.040 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.040 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.040 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.040 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.040 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.040 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.040 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.040 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.040 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.040 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.040 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.040 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.040 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.040 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.040 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.040 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.041 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.041 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.041 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.041 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.041 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.041 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.041 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.041 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.041 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.041 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.041 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.041 09:07:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # IFS== 00:13:58.041 09:07:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:58.041 09:07:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:58.041 09:07:10 accel -- accel/accel.sh@75 -- # killprocess 60501 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@947 -- # '[' -z 60501 ']' 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@951 -- # kill -0 60501 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@952 -- # uname 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60501 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60501' 00:13:58.041 killing process with pid 60501 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@966 -- # kill 60501 00:13:58.041 09:07:10 accel -- common/autotest_common.sh@971 -- # wait 60501 00:13:58.298 09:07:10 accel -- accel/accel.sh@76 -- # trap - ERR 00:13:58.298 09:07:10 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:13:58.298 09:07:10 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:58.298 09:07:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:58.298 09:07:10 accel -- common/autotest_common.sh@10 -- # set +x 00:13:58.298 09:07:10 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:13:58.298 09:07:10 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:13:58.298 09:07:10 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:58.298 09:07:10 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:13:58.554 09:07:10 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:58.554 09:07:10 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:13:58.554 09:07:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:58.554 09:07:10 accel -- common/autotest_common.sh@10 -- # set +x 00:13:58.554 ************************************ 00:13:58.554 START TEST accel_missing_filename 00:13:58.554 ************************************ 00:13:58.554 09:07:10 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:13:58.554 09:07:10 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:13:58.554 09:07:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:58.554 09:07:10 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:13:58.554 09:07:10 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:58.554 09:07:10 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:13:58.554 09:07:10 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:58.555 09:07:10 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:13:58.555 09:07:10 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:13:58.555 [2024-05-15 09:07:10.809113] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:58.555 [2024-05-15 09:07:10.809936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60558 ] 00:13:58.555 [2024-05-15 09:07:10.945698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.812 [2024-05-15 09:07:11.051649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.812 [2024-05-15 09:07:11.097030] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:58.812 [2024-05-15 09:07:11.158989] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:13:59.143 A filename is required. 
00:13:59.143 09:07:11 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:13:59.143 09:07:11 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:59.143 09:07:11 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:13:59.143 09:07:11 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:13:59.143 09:07:11 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:13:59.143 09:07:11 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:59.143 00:13:59.143 real 0m0.485s 00:13:59.143 user 0m0.309s 00:13:59.143 sys 0m0.110s 00:13:59.143 09:07:11 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:59.143 09:07:11 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:13:59.143 ************************************ 00:13:59.143 END TEST accel_missing_filename 00:13:59.143 ************************************ 00:13:59.143 09:07:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:59.143 09:07:11 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:13:59.143 09:07:11 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:59.143 09:07:11 accel -- common/autotest_common.sh@10 -- # set +x 00:13:59.143 ************************************ 00:13:59.143 START TEST accel_compress_verify 00:13:59.143 ************************************ 00:13:59.143 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:59.143 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:13:59.143 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:59.143 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:13:59.143 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:59.143 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:13:59.143 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:59.143 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:59.143 09:07:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:59.143 09:07:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:59.143 09:07:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:59.143 09:07:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:59.143 09:07:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:59.143 09:07:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:59.143 09:07:11 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:59.143 09:07:11 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:59.143 09:07:11 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:13:59.143 [2024-05-15 09:07:11.340389] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:59.143 [2024-05-15 09:07:11.340751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60577 ] 00:13:59.143 [2024-05-15 09:07:11.476503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.399 [2024-05-15 09:07:11.600788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.399 [2024-05-15 09:07:11.650044] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:59.399 [2024-05-15 09:07:11.713241] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:13:59.399 00:13:59.399 Compression does not support the verify option, aborting. 00:13:59.399 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:13:59.399 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:59.399 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:13:59.399 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:13:59.399 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:13:59.399 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:59.399 00:13:59.399 real 0m0.508s 00:13:59.399 user 0m0.327s 00:13:59.399 sys 0m0.112s 00:13:59.399 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:59.399 09:07:11 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:13:59.399 ************************************ 00:13:59.399 END TEST accel_compress_verify 00:13:59.399 ************************************ 00:13:59.658 09:07:11 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:59.658 09:07:11 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:13:59.658 09:07:11 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:59.658 09:07:11 accel -- common/autotest_common.sh@10 -- # set +x 00:13:59.658 ************************************ 00:13:59.658 START TEST accel_wrong_workload 00:13:59.658 ************************************ 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:13:59.658 09:07:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:59.658 09:07:11 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:13:59.658 09:07:11 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:59.658 09:07:11 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:59.658 09:07:11 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:59.658 09:07:11 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:59.658 09:07:11 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:59.658 09:07:11 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:13:59.658 09:07:11 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:13:59.658 Unsupported workload type: foobar 00:13:59.658 [2024-05-15 09:07:11.915866] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:59.658 accel_perf options: 00:13:59.658 [-h help message] 00:13:59.658 [-q queue depth per core] 00:13:59.658 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:59.658 [-T number of threads per core 00:13:59.658 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:59.658 [-t time in seconds] 00:13:59.658 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:59.658 [ dif_verify, , dif_generate, dif_generate_copy 00:13:59.658 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:59.658 [-l for compress/decompress workloads, name of uncompressed input file 00:13:59.658 [-S for crc32c workload, use this seed value (default 0) 00:13:59.658 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:59.658 [-f for fill workload, use this BYTE value (default 255) 00:13:59.658 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:59.658 [-y verify result if this switch is on] 00:13:59.658 [-a tasks to allocate per core (default: same value as -q)] 00:13:59.658 Can be used to spread operations across a wider range of memory. 
00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:59.658 00:13:59.658 real 0m0.036s 00:13:59.658 user 0m0.014s 00:13:59.658 sys 0m0.018s 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:59.658 09:07:11 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:13:59.658 ************************************ 00:13:59.658 END TEST accel_wrong_workload 00:13:59.658 ************************************ 00:13:59.658 09:07:11 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:59.658 09:07:11 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:13:59.658 09:07:11 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:59.658 09:07:11 accel -- common/autotest_common.sh@10 -- # set +x 00:13:59.658 ************************************ 00:13:59.658 START TEST accel_negative_buffers 00:13:59.658 ************************************ 00:13:59.658 09:07:11 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:59.658 09:07:11 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:13:59.658 09:07:11 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:59.658 09:07:11 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:13:59.658 09:07:11 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:59.658 09:07:11 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:13:59.658 09:07:11 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:59.658 09:07:11 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:13:59.658 09:07:11 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:13:59.658 -x option must be non-negative. 
00:13:59.658 [2024-05-15 09:07:12.000682] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:59.658 accel_perf options: 00:13:59.658 [-h help message] 00:13:59.658 [-q queue depth per core] 00:13:59.658 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:59.658 [-T number of threads per core 00:13:59.658 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:59.658 [-t time in seconds] 00:13:59.658 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:59.658 [ dif_verify, , dif_generate, dif_generate_copy 00:13:59.658 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:59.658 [-l for compress/decompress workloads, name of uncompressed input file 00:13:59.658 [-S for crc32c workload, use this seed value (default 0) 00:13:59.658 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:59.658 [-f for fill workload, use this BYTE value (default 255) 00:13:59.658 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:59.658 [-y verify result if this switch is on] 00:13:59.658 [-a tasks to allocate per core (default: same value as -q)] 00:13:59.658 Can be used to spread operations across a wider range of memory. 00:13:59.658 09:07:12 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:13:59.658 09:07:12 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:59.658 09:07:12 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:59.658 09:07:12 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:59.658 00:13:59.658 real 0m0.034s 00:13:59.658 user 0m0.016s 00:13:59.658 sys 0m0.016s 00:13:59.658 09:07:12 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:59.659 09:07:12 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:13:59.659 ************************************ 00:13:59.659 END TEST accel_negative_buffers 00:13:59.659 ************************************ 00:13:59.659 09:07:12 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:59.659 09:07:12 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:13:59.659 09:07:12 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:59.659 09:07:12 accel -- common/autotest_common.sh@10 -- # set +x 00:13:59.659 ************************************ 00:13:59.659 START TEST accel_crc32c 00:13:59.659 ************************************ 00:13:59.659 09:07:12 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:59.659 09:07:12 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:59.659 [2024-05-15 09:07:12.088799] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:13:59.659 [2024-05-15 09:07:12.089169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:13:59.948 [2024-05-15 09:07:12.230902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.948 [2024-05-15 09:07:12.351068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:00.206 09:07:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:01.140 09:07:13 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:14:01.140 09:07:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:01.140 00:14:01.140 real 0m1.508s 00:14:01.140 user 0m1.304s 00:14:01.140 sys 0m0.109s 00:14:01.140 09:07:13 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:01.140 09:07:13 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:14:01.140 ************************************ 00:14:01.140 END TEST accel_crc32c 00:14:01.140 ************************************ 00:14:01.398 09:07:13 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:14:01.398 09:07:13 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:14:01.398 09:07:13 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:01.398 09:07:13 accel -- common/autotest_common.sh@10 -- # set +x 00:14:01.398 ************************************ 00:14:01.398 START TEST accel_crc32c_C2 00:14:01.398 ************************************ 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:01.398 09:07:13 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:14:01.398 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:14:01.398 [2024-05-15 09:07:13.655441] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:01.398 [2024-05-15 09:07:13.655856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60675 ] 00:14:01.398 [2024-05-15 09:07:13.799345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.656 [2024-05-15 09:07:13.922711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:01.656 09:07:13 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:14:01.656 09:07:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:01.656 09:07:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:03.029 00:14:03.029 real 0m1.532s 00:14:03.029 user 0m1.313s 00:14:03.029 sys 0m0.122s 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:03.029 09:07:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:14:03.029 ************************************ 00:14:03.029 END TEST accel_crc32c_C2 00:14:03.029 ************************************ 00:14:03.030 09:07:15 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:14:03.030 09:07:15 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:14:03.030 09:07:15 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:03.030 09:07:15 accel -- common/autotest_common.sh@10 -- # set +x 00:14:03.030 ************************************ 00:14:03.030 START TEST accel_copy 00:14:03.030 ************************************ 00:14:03.030 09:07:15 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:14:03.030 
09:07:15 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:14:03.030 09:07:15 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:14:03.030 [2024-05-15 09:07:15.232456] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:03.030 [2024-05-15 09:07:15.233377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60710 ] 00:14:03.030 [2024-05-15 09:07:15.374267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.288 [2024-05-15 09:07:15.499238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:03.288 09:07:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:14:04.688 09:07:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:04.688 00:14:04.688 real 0m1.629s 00:14:04.688 user 0m1.421s 00:14:04.688 sys 0m0.109s 00:14:04.688 09:07:16 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:04.688 09:07:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:14:04.688 ************************************ 00:14:04.688 END TEST accel_copy 00:14:04.688 ************************************ 00:14:04.689 09:07:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:04.689 09:07:16 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:14:04.689 09:07:16 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:04.689 09:07:16 accel -- common/autotest_common.sh@10 -- # set +x 00:14:04.689 ************************************ 00:14:04.689 START TEST accel_fill 00:14:04.689 ************************************ 00:14:04.689 09:07:16 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:14:04.689 09:07:16 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:14:04.689 [2024-05-15 09:07:16.917699] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
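The run_test lines in this section pass their workload flags straight through accel_test to accel_perf, and every flag they use (-t, -w, -S, -f, -q, -a, -y) appears in the option summary accel_perf printed during the negative-buffers test. A hand-run equivalent, assuming the build-tree path shown in the trace, would look like this:

    # Hand-run equivalents of the harness invocations, using only flags that
    # appear in the accel_perf option summary printed earlier in this log.
    # The binary path is the one shown in the trace; adjust it to your checkout.
    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

    "$PERF" -t 1 -w copy -y                      # plain copy for 1 second, verify the result
    "$PERF" -t 1 -w crc32c -S 32 -y              # crc32c with seed value 32
    "$PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill byte 128, queue depth 64, 64 tasks per core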
00:14:04.689 [2024-05-15 09:07:16.918013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60750 ] 00:14:04.689 [2024-05-15 09:07:17.057970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.947 [2024-05-15 09:07:17.164210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:04.947 09:07:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:06.324 09:07:18 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:14:06.324 ************************************ 00:14:06.324 END TEST accel_fill 00:14:06.324 ************************************ 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:14:06.324 09:07:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:06.324 00:14:06.324 real 0m1.486s 00:14:06.324 user 0m1.288s 00:14:06.324 sys 0m0.099s 00:14:06.324 09:07:18 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:06.324 09:07:18 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:14:06.324 09:07:18 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:14:06.324 09:07:18 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:14:06.324 09:07:18 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:06.324 09:07:18 accel -- common/autotest_common.sh@10 -- # set +x 00:14:06.324 ************************************ 00:14:06.324 START TEST accel_copy_crc32c 00:14:06.324 ************************************ 00:14:06.324 09:07:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:14:06.324 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:14:06.324 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:14:06.324 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.324 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:14:06.324 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.324 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:14:06.324 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:14:06.325 [2024-05-15 09:07:18.451735] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
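Each invocation in the trace also carries -c /dev/fd/62: build_accel_config assembles a JSON config from accel_json_cfg and hands it to accel_perf over process substitution, and since every [[ 0 -gt 0 ]] guard above evaluates false the config stays empty and the software module is selected, which is why each test finishes by checking [[ -n software ]]. Below is a sketch of that plumbing only; emit_accel_json is a hypothetical stub, and the placeholder JSON it prints is an assumption rather than a known-good accel_perf config.

    # Sketch of the "-c /dev/fd/62" plumbing visible in every trace line above.
    # emit_accel_json is a hypothetical stand-in for build_accel_config, and the
    # placeholder JSON it prints is an assumption made purely for illustration.
    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    emit_accel_json() { printf '{}\n'; }

    "$PERF" -c <(emit_accel_json) -t 1 -w copy_crc32c -y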
00:14:06.325 [2024-05-15 09:07:18.452815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60779 ] 00:14:06.325 [2024-05-15 09:07:18.597785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.325 [2024-05-15 09:07:18.697913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:06.325 09:07:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:07.736 ************************************ 00:14:07.736 END TEST accel_copy_crc32c 00:14:07.736 ************************************ 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:07.736 00:14:07.736 real 0m1.488s 00:14:07.736 user 0m1.297s 00:14:07.736 sys 0m0.099s 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:07.736 09:07:19 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:14:07.736 09:07:19 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:14:07.736 09:07:19 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:14:07.736 09:07:19 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:07.736 09:07:19 accel -- common/autotest_common.sh@10 -- # set +x 00:14:07.736 ************************************ 00:14:07.736 START TEST accel_copy_crc32c_C2 00:14:07.736 ************************************ 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:14:07.736 09:07:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:14:07.736 [2024-05-15 09:07:19.997336] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:07.736 [2024-05-15 09:07:19.997693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60819 ] 00:14:07.736 [2024-05-15 09:07:20.142525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.995 [2024-05-15 09:07:20.253318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:07.995 09:07:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:09.369 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:09.369 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.369 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:09.369 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:09.370 00:14:09.370 real 0m1.505s 00:14:09.370 user 0m1.295s 00:14:09.370 sys 0m0.109s 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:09.370 09:07:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:14:09.370 ************************************ 00:14:09.370 END TEST accel_copy_crc32c_C2 00:14:09.370 ************************************ 00:14:09.370 09:07:21 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:14:09.370 09:07:21 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:14:09.370 09:07:21 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:09.370 09:07:21 accel -- common/autotest_common.sh@10 -- # set +x 00:14:09.370 ************************************ 00:14:09.370 START TEST accel_dualcast 00:14:09.370 ************************************ 00:14:09.370 09:07:21 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:14:09.370 09:07:21 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:14:09.370 [2024-05-15 09:07:21.549451] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
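For reference, the dualcast run above reduces to the single accel_perf command visible in the trace. A standalone equivalent, as an illustrative sketch only (it drops the -c /dev/fd/62 JSON config that accel.sh feeds in, which assumes the default software path is acceptable), would be:

  # Re-run the traced dualcast case by hand; flags taken directly from the
  # trace: -t 1 matches the '1 seconds' run time, -w dualcast selects the
  # workload, -y corresponds to the Yes/verify setting.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dualcast -y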
00:14:09.370 [2024-05-15 09:07:21.549897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60848 ] 00:14:09.370 [2024-05-15 09:07:21.694271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.370 [2024-05-15 09:07:21.802640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:09.629 09:07:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 
09:07:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:14:11.003 09:07:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:11.003 00:14:11.003 real 0m1.498s 00:14:11.003 user 0m1.296s 00:14:11.003 sys 0m0.104s 00:14:11.003 09:07:23 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:11.003 09:07:23 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:14:11.003 ************************************ 00:14:11.003 END TEST accel_dualcast 00:14:11.003 ************************************ 00:14:11.003 09:07:23 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:14:11.003 09:07:23 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:14:11.003 09:07:23 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:11.003 09:07:23 accel -- common/autotest_common.sh@10 -- # set +x 00:14:11.003 ************************************ 00:14:11.003 START TEST accel_compare 00:14:11.003 ************************************ 00:14:11.003 09:07:23 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:14:11.003 [2024-05-15 09:07:23.098363] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:11.003 [2024-05-15 09:07:23.098610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60888 ] 00:14:11.003 [2024-05-15 09:07:23.238576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.003 [2024-05-15 09:07:23.339120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:11.003 09:07:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:12.398 09:07:24 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:14:12.398 09:07:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:12.398 00:14:12.398 real 0m1.475s 00:14:12.398 user 0m1.281s 00:14:12.398 sys 0m0.097s 00:14:12.398 09:07:24 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:12.398 09:07:24 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:14:12.398 ************************************ 00:14:12.398 END TEST accel_compare 00:14:12.398 ************************************ 00:14:12.398 09:07:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:14:12.398 09:07:24 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:14:12.398 09:07:24 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:12.398 09:07:24 accel -- common/autotest_common.sh@10 -- # set +x 00:14:12.398 ************************************ 00:14:12.398 START TEST accel_xor 00:14:12.398 ************************************ 00:14:12.398 09:07:24 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:14:12.398 09:07:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:14:12.398 [2024-05-15 09:07:24.632892] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:12.398 [2024-05-15 09:07:24.633209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:14:12.398 [2024-05-15 09:07:24.777516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.656 [2024-05-15 09:07:24.884720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:14:12.656 09:07:24 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.656 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:12.657 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.657 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.657 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:12.657 09:07:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:12.657 09:07:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:12.657 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:12.657 09:07:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:14:13.669 09:07:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:13.669 00:14:13.669 real 0m1.505s 00:14:13.669 user 0m1.296s 00:14:13.669 sys 0m0.111s 00:14:13.669 09:07:26 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:13.929 09:07:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:14:13.929 ************************************ 00:14:13.929 END TEST accel_xor 00:14:13.929 ************************************ 00:14:13.929 09:07:26 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:14:13.929 09:07:26 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:14:13.929 09:07:26 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:13.929 09:07:26 accel -- common/autotest_common.sh@10 -- # set +x 00:14:13.929 ************************************ 00:14:13.929 START TEST accel_xor 00:14:13.929 ************************************ 00:14:13.929 09:07:26 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:14:13.929 09:07:26 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:14:13.929 [2024-05-15 09:07:26.198646] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:13.929 [2024-05-15 09:07:26.199011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:14:13.929 [2024-05-15 09:07:26.339786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.190 [2024-05-15 09:07:26.460005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:14:14.190 09:07:26 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:14:14.190 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:14.191 09:07:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:14:15.605 09:07:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:15.605 00:14:15.605 real 0m1.519s 00:14:15.605 user 0m1.315s 00:14:15.605 sys 0m0.109s 00:14:15.605 09:07:27 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:15.605 09:07:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:14:15.605 ************************************ 00:14:15.605 END TEST accel_xor 00:14:15.605 ************************************ 00:14:15.605 09:07:27 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:14:15.605 09:07:27 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:14:15.605 09:07:27 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:15.605 09:07:27 accel -- common/autotest_common.sh@10 -- # set +x 00:14:15.605 ************************************ 00:14:15.605 START TEST accel_dif_verify 00:14:15.605 ************************************ 00:14:15.605 09:07:27 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:14:15.605 09:07:27 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:14:15.605 09:07:27 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:14:15.605 09:07:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.605 09:07:27 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:14:15.605 09:07:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:14:15.606 09:07:27 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:14:15.606 [2024-05-15 09:07:27.773623] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
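Every test block in this log has the same shape: run_test (defined in the common/autotest_common.sh referenced in the trace) prints a START banner, runs and times the accel_test command, emits the real/user/sys summary, and closes with an END banner. A minimal stand-in with that behaviour, sketched under the assumption that bash's time keyword produces the timing lines and without SPDK's extra bookkeeping, could be:

  # Illustrative wrapper only; the real run_test lives in SPDK's
  # common/autotest_common.sh and does more (xtrace control, argument checks).
  run_test_sketch() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                # prints the real/user/sys lines seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }
  # Usage mirroring the traced call:
  #   run_test_sketch accel_dif_verify accel_test -t 1 -w dif_verify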
00:14:15.606 [2024-05-15 09:07:27.773928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60986 ] 00:14:15.606 [2024-05-15 09:07:27.917025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.884 [2024-05-15 09:07:28.038702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:15.884 09:07:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:16.816 09:07:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:16.816 09:07:29 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:14:16.816 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:16.816 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:16.816 09:07:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:14:17.074 09:07:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:17.074 00:14:17.074 real 0m1.523s 00:14:17.074 user 0m1.313s 00:14:17.074 sys 0m0.109s 00:14:17.074 09:07:29 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:17.074 09:07:29 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:14:17.074 ************************************ 00:14:17.074 END TEST accel_dif_verify 00:14:17.074 ************************************ 00:14:17.074 09:07:29 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:14:17.074 09:07:29 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:14:17.074 09:07:29 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:17.074 09:07:29 accel -- common/autotest_common.sh@10 -- # set +x 00:14:17.074 ************************************ 00:14:17.074 START TEST accel_dif_generate 00:14:17.074 ************************************ 00:14:17.074 09:07:29 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:14:17.074 09:07:29 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:14:17.074 09:07:29 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:14:17.074 [2024-05-15 09:07:29.347995] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:17.074 [2024-05-15 09:07:29.348362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61026 ] 00:14:17.074 [2024-05-15 09:07:29.492740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.332 [2024-05-15 09:07:29.605451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 
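The command line captured just above shows how the harness drives this step: run_test wraps accel_test, which launches build/examples/accel_perf with -t 1 (run for one second, matching the '1 seconds' value echoed in the trace) and -w dif_generate, feeding its JSON accel config through -c /dev/fd/62 (empty in this run). A minimal way to reproduce the same step by hand, assuming the vagrant build path seen in the trace, would be:

  # Hedged sketch: re-run the DIF-generate workload outside the harness.
  # Binary path and flags are copied from the trace above; the -c /dev/fd/62
  # config plumbing is harness-specific and omitted here.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate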
09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.332 09:07:29 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:14:17.332 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:17.333 09:07:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:18.753 ************************************ 00:14:18.753 END TEST accel_dif_generate 00:14:18.753 ************************************ 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:14:18.753 09:07:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:18.753 00:14:18.753 real 0m1.508s 00:14:18.753 user 0m1.305s 
00:14:18.753 sys 0m0.111s 00:14:18.753 09:07:30 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:18.753 09:07:30 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:14:18.753 09:07:30 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:14:18.753 09:07:30 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:14:18.753 09:07:30 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:18.753 09:07:30 accel -- common/autotest_common.sh@10 -- # set +x 00:14:18.753 ************************************ 00:14:18.753 START TEST accel_dif_generate_copy 00:14:18.753 ************************************ 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:14:18.753 09:07:30 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:14:18.753 [2024-05-15 09:07:30.907057] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:18.753 [2024-05-15 09:07:30.907423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61055 ] 00:14:18.753 [2024-05-15 09:07:31.051443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.753 [2024-05-15 09:07:31.189769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.011 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:19.012 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.012 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.012 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:19.012 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:19.012 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:19.012 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:19.012 09:07:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:20.385 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:20.386 00:14:20.386 real 0m1.531s 00:14:20.386 user 0m1.312s 00:14:20.386 sys 0m0.123s 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:20.386 09:07:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:14:20.386 ************************************ 00:14:20.386 END TEST accel_dif_generate_copy 00:14:20.386 ************************************ 00:14:20.386 09:07:32 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:14:20.386 09:07:32 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:20.386 09:07:32 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:14:20.386 09:07:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:20.386 09:07:32 accel -- common/autotest_common.sh@10 -- # set +x 00:14:20.386 ************************************ 00:14:20.386 START TEST accel_comp 00:14:20.386 ************************************ 00:14:20.386 09:07:32 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
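The accel_comp step that starts below switches the workload to compression and points accel_perf at the bundled test corpus. A hedged sketch of the equivalent manual invocation, using only the flags visible in the trace:

  # Hedged sketch of the compress step below: one second of software
  # compression over the bundled corpus. -l appears to name the input file;
  # the path is the one the harness uses in this trace.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib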
00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:14:20.386 [2024-05-15 09:07:32.492438] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:20.386 [2024-05-15 09:07:32.493479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61094 ] 00:14:20.386 [2024-05-15 09:07:32.630924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.386 [2024-05-15 09:07:32.745784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:20.386 09:07:32 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:20.386 09:07:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:14:21.798 09:07:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:21.798 00:14:21.798 real 0m1.505s 00:14:21.798 user 0m1.290s 00:14:21.798 sys 0m0.116s 00:14:21.798 09:07:33 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:21.798 09:07:33 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:14:21.798 ************************************ 00:14:21.798 END TEST accel_comp 00:14:21.798 ************************************ 00:14:21.798 09:07:34 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:21.798 09:07:34 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:14:21.798 09:07:34 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:21.798 09:07:34 accel -- common/autotest_common.sh@10 -- # set +x 00:14:21.798 ************************************ 00:14:21.798 START TEST accel_decomp 00:14:21.798 ************************************ 00:14:21.798 09:07:34 accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:14:21.798 
09:07:34 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:21.798 09:07:34 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:14:21.799 09:07:34 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:14:21.799 [2024-05-15 09:07:34.054474] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:21.799 [2024-05-15 09:07:34.054795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61124 ] 00:14:21.799 [2024-05-15 09:07:34.203842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.057 [2024-05-15 09:07:34.322783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
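The accel_decomp run traced here uses the same corpus but adds -y, which, judging from how the harness pairs it with the decompress workload, asks accel_perf to verify the decompressed output. A hedged sketch of the same invocation:

  # Hedged sketch of the decompress step traced here; -y appears to ask
  # accel_perf to verify the decompressed data against the original input.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y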
00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.057 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.058 09:07:34 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:22.058 09:07:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:23.432 09:07:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:23.432 00:14:23.432 real 0m1.530s 00:14:23.432 user 0m1.315s 00:14:23.432 sys 0m0.112s 00:14:23.432 09:07:35 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:23.432 09:07:35 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:14:23.432 ************************************ 00:14:23.432 END TEST accel_decomp 00:14:23.432 ************************************ 00:14:23.432 09:07:35 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:23.432 09:07:35 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:14:23.432 09:07:35 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:23.432 09:07:35 accel -- common/autotest_common.sh@10 -- # set +x 
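The next test, started below, repeats the decompression run with -o 0 appended. In that run the trace echoes '111250 bytes' where the earlier runs echoed '4096 bytes', so -o evidently sets the transfer size and 0 appears to make accel_perf operate on the whole input file in a single operation. A hedged sketch:

  # Hedged sketch of the full-buffer variant started below. -o sets the
  # transfer size; 0 appears to mean "use the whole file" (the trace echoes
  # '111250 bytes' instead of the usual '4096 bytes').
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0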
00:14:23.432 ************************************ 00:14:23.432 START TEST accel_decmop_full 00:14:23.432 ************************************ 00:14:23.432 09:07:35 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:14:23.432 09:07:35 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:14:23.432 [2024-05-15 09:07:35.638423] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:23.432 [2024-05-15 09:07:35.638742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61163 ] 00:14:23.432 [2024-05-15 09:07:35.785588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.690 [2024-05-15 09:07:35.900044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.690 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:23.691 09:07:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:25.063 09:07:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:25.063 09:07:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:25.063 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:25.063 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:25.063 09:07:37 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:25.064 09:07:37 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:25.064 00:14:25.064 real 0m1.571s 00:14:25.064 user 0m1.340s 00:14:25.064 sys 0m0.129s 00:14:25.064 09:07:37 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:25.064 09:07:37 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:14:25.064 ************************************ 00:14:25.064 END TEST accel_decmop_full 00:14:25.064 ************************************ 00:14:25.064 09:07:37 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:25.064 09:07:37 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:14:25.064 09:07:37 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:25.064 09:07:37 accel -- common/autotest_common.sh@10 -- # set +x 00:14:25.064 ************************************ 00:14:25.064 START TEST accel_decomp_mcore 00:14:25.064 ************************************ 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
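The multi-core variant traced below adds -m 0xf, the usual SPDK core-mask option; the app startup lines that follow report four available cores and the reactor log shows reactors starting on cores 0 through 3. A hedged sketch of the same run:

  # Hedged sketch of the multi-core decompress variant traced below.
  # -m 0xf is the SPDK core mask (cores 0-3); the EAL/reactor output in the
  # trace confirms four reactors are started.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf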
00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:14:25.064 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:14:25.064 [2024-05-15 09:07:37.255238] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:25.064 [2024-05-15 09:07:37.256037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61199 ] 00:14:25.064 [2024-05-15 09:07:37.390413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.064 [2024-05-15 09:07:37.502506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.064 [2024-05-15 09:07:37.502679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.064 [2024-05-15 09:07:37.502828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.064 [2024-05-15 09:07:37.502839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.322 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:25.323 09:07:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.695 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.695 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.695 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.695 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:26.696 00:14:26.696 real 0m1.531s 00:14:26.696 user 0m4.669s 00:14:26.696 sys 0m0.115s 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:26.696 09:07:38 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:14:26.696 ************************************ 00:14:26.696 END TEST accel_decomp_mcore 00:14:26.696 ************************************ 00:14:26.696 09:07:38 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:26.696 09:07:38 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:14:26.696 09:07:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:26.696 09:07:38 accel -- common/autotest_common.sh@10 -- # set +x 00:14:26.696 ************************************ 00:14:26.696 START TEST accel_decomp_full_mcore 00:14:26.696 ************************************ 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:14:26.696 09:07:38 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
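In the timing block above, user time (0m4.669s) exceeds wall-clock time (0m1.531s) because the -m 0xf mask keeps four reactor cores busy in parallel for the one-second window. The accel_decomp_full_mcore case that starts next reuses the same command line with -o 0 added; the _full variants size each transfer to the whole bib payload rather than the 4096-byte chunks used above, which the traces echo as '111250 bytes'. A sketch of that invocation, under the same assumptions as the previous note:

  # same four-core decompress run, but with full-payload transfers (-o 0)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -y -m 0xf -o 0 \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib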
00:14:26.696 [2024-05-15 09:07:38.840063] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:26.696 [2024-05-15 09:07:38.840422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61236 ] 00:14:26.696 [2024-05-15 09:07:38.981014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.696 [2024-05-15 09:07:39.095808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.696 [2024-05-15 09:07:39.095898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.696 [2024-05-15 09:07:39.096089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.696 [2024-05-15 09:07:39.096088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:26.954 09:07:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:28.345 09:07:40 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:28.345 00:14:28.345 real 0m1.546s 00:14:28.345 user 0m0.013s 00:14:28.345 sys 0m0.001s 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:28.345 09:07:40 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:14:28.346 ************************************ 00:14:28.346 END TEST accel_decomp_full_mcore 00:14:28.346 ************************************ 00:14:28.346 09:07:40 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:28.346 09:07:40 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:14:28.346 09:07:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:28.346 09:07:40 accel -- common/autotest_common.sh@10 -- # set +x 00:14:28.346 ************************************ 00:14:28.346 START TEST accel_decomp_mthread 00:14:28.346 ************************************ 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:14:28.346 [2024-05-15 09:07:40.448666] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:28.346 [2024-05-15 09:07:40.449502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61274 ] 00:14:28.346 [2024-05-15 09:07:40.592954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.346 [2024-05-15 09:07:40.701053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:28.346 09:07:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:29.728 00:14:29.728 real 0m1.515s 00:14:29.728 user 0m1.299s 00:14:29.728 sys 0m0.116s 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:29.728 09:07:41 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:14:29.728 ************************************ 00:14:29.728 END TEST accel_decomp_mthread 00:14:29.728 ************************************ 00:14:29.728 09:07:41 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:29.728 09:07:41 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:14:29.728 09:07:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:29.728 09:07:41 accel -- common/autotest_common.sh@10 -- # set +x 00:14:29.729 ************************************ 00:14:29.729 START TEST accel_decomp_full_mthread 00:14:29.729 ************************************ 00:14:29.729 09:07:41 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:14:29.729 09:07:41 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:14:29.729 [2024-05-15 09:07:42.022173] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:29.729 [2024-05-15 09:07:42.022566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61308 ] 00:14:29.729 [2024-05-15 09:07:42.166793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.009 [2024-05-15 09:07:42.274226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:30.009 09:07:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:31.379 ************************************ 00:14:31.379 END TEST accel_decomp_full_mthread 00:14:31.379 ************************************ 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:31.379 00:14:31.379 real 0m1.550s 00:14:31.379 user 0m1.348s 00:14:31.379 sys 0m0.100s 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:31.379 09:07:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:14:31.379 09:07:43 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:14:31.379 09:07:43 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:31.379 09:07:43 accel -- accel/accel.sh@137 -- # build_accel_config 00:14:31.379 09:07:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:31.379 09:07:43 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:14:31.379 09:07:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:31.379 09:07:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:31.379 09:07:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:31.379 09:07:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:31.379 09:07:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:31.379 09:07:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:14:31.379 09:07:43 accel -- accel/accel.sh@41 -- # jq -r . 00:14:31.379 09:07:43 accel -- common/autotest_common.sh@10 -- # set +x 00:14:31.379 ************************************ 00:14:31.379 START TEST accel_dif_functional_tests 00:14:31.379 ************************************ 00:14:31.379 09:07:43 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:31.379 [2024-05-15 09:07:43.646416] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:31.379 [2024-05-15 09:07:43.646800] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61344 ] 00:14:31.380 [2024-05-15 09:07:43.794397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.637 [2024-05-15 09:07:43.906111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.637 [2024-05-15 09:07:43.906221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.637 [2024-05-15 09:07:43.906221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.637 00:14:31.637 00:14:31.637 CUnit - A unit testing framework for C - Version 2.1-3 00:14:31.637 http://cunit.sourceforge.net/ 00:14:31.637 00:14:31.637 00:14:31.637 Suite: accel_dif 00:14:31.637 Test: verify: DIF generated, GUARD check ...passed 00:14:31.637 Test: verify: DIF generated, APPTAG check ...passed 00:14:31.637 Test: verify: DIF generated, REFTAG check ...passed 00:14:31.637 Test: verify: DIF not generated, GUARD check ...[2024-05-15 09:07:43.995803] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:31.637 [2024-05-15 09:07:43.996175] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:31.637 passed 00:14:31.637 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 09:07:43.996392] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:31.637 [2024-05-15 09:07:43.996609] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:31.637 passed 00:14:31.637 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 09:07:43.996821] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:31.637 [2024-05-15 09:07:43.997073] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:31.637 passed 00:14:31.637 Test: verify: 
APPTAG correct, APPTAG check ...passed 00:14:31.637 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 09:07:43.997667] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:14:31.637 passed 00:14:31.637 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:14:31.637 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:14:31.637 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:14:31.637 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 09:07:43.998325] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:14:31.637 passed 00:14:31.637 Test: generate copy: DIF generated, GUARD check ...passed 00:14:31.637 Test: generate copy: DIF generated, APTTAG check ...passed 00:14:31.637 Test: generate copy: DIF generated, REFTAG check ...passed 00:14:31.637 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:14:31.637 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:14:31.637 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:14:31.637 Test: generate copy: iovecs-len validate ...[2024-05-15 09:07:43.999739] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:14:31.637 passed 00:14:31.637 Test: generate copy: buffer alignment validate ...passed 00:14:31.637 00:14:31.637 Run Summary: Type Total Ran Passed Failed Inactive 00:14:31.637 suites 1 1 n/a 0 0 00:14:31.637 tests 20 20 20 0 0 00:14:31.637 asserts 204 204 204 0 n/a 00:14:31.637 00:14:31.637 Elapsed time = 0.010 seconds 00:14:31.895 ************************************ 00:14:31.895 END TEST accel_dif_functional_tests 00:14:31.895 ************************************ 00:14:31.895 00:14:31.895 real 0m0.633s 00:14:31.895 user 0m0.799s 00:14:31.895 sys 0m0.150s 00:14:31.895 09:07:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:31.895 09:07:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:14:31.895 00:14:31.895 real 0m35.146s 00:14:31.895 user 0m36.619s 00:14:31.895 sys 0m3.935s 00:14:31.895 09:07:44 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:31.895 ************************************ 00:14:31.895 END TEST accel 00:14:31.895 ************************************ 00:14:31.895 09:07:44 accel -- common/autotest_common.sh@10 -- # set +x 00:14:31.895 09:07:44 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:14:31.895 09:07:44 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:31.895 09:07:44 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:31.895 09:07:44 -- common/autotest_common.sh@10 -- # set +x 00:14:31.895 ************************************ 00:14:31.895 START TEST accel_rpc 00:14:31.895 ************************************ 00:14:31.895 09:07:44 accel_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:14:32.154 * Looking for test storage... 
00:14:32.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:14:32.154 09:07:44 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:32.154 09:07:44 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61414 00:14:32.154 09:07:44 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 61414 00:14:32.154 09:07:44 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:32.154 09:07:44 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 61414 ']' 00:14:32.154 09:07:44 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.154 09:07:44 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:32.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.154 09:07:44 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.154 09:07:44 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:32.154 09:07:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.154 [2024-05-15 09:07:44.467948] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:32.154 [2024-05-15 09:07:44.468039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61414 ] 00:14:32.412 [2024-05-15 09:07:44.609180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.412 [2024-05-15 09:07:44.718965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.344 09:07:45 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:33.344 09:07:45 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:14:33.344 09:07:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:14:33.344 09:07:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:14:33.344 09:07:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:14:33.344 09:07:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:14:33.344 09:07:45 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:14:33.344 09:07:45 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:33.344 09:07:45 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:33.344 09:07:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.344 ************************************ 00:14:33.344 START TEST accel_assign_opcode 00:14:33.344 ************************************ 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:33.344 [2024-05-15 09:07:45.475588] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # 
rpc_cmd accel_assign_opc -o copy -m software 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:33.344 [2024-05-15 09:07:45.483578] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:14:33.344 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.345 software 00:14:33.345 00:14:33.345 real 0m0.262s 00:14:33.345 user 0m0.046s 00:14:33.345 sys 0m0.013s 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:33.345 ************************************ 00:14:33.345 END TEST accel_assign_opcode 00:14:33.345 ************************************ 00:14:33.345 09:07:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:33.345 09:07:45 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 61414 00:14:33.345 09:07:45 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 61414 ']' 00:14:33.345 09:07:45 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 61414 00:14:33.345 09:07:45 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:14:33.345 09:07:45 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:33.345 09:07:45 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 61414 00:14:33.602 09:07:45 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:33.602 killing process with pid 61414 00:14:33.602 09:07:45 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:33.602 09:07:45 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 61414' 00:14:33.602 09:07:45 accel_rpc -- common/autotest_common.sh@966 -- # kill 61414 00:14:33.602 09:07:45 accel_rpc -- common/autotest_common.sh@971 -- # wait 61414 00:14:33.860 00:14:33.860 real 0m1.864s 00:14:33.860 user 0m1.989s 00:14:33.860 sys 0m0.437s 00:14:33.860 09:07:46 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:33.860 ************************************ 00:14:33.860 END TEST accel_rpc 00:14:33.860 ************************************ 00:14:33.860 09:07:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.860 09:07:46 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:14:33.860 09:07:46 -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:33.860 09:07:46 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:33.860 09:07:46 -- common/autotest_common.sh@10 -- # set +x 00:14:33.860 ************************************ 00:14:33.860 START TEST app_cmdline 00:14:33.860 ************************************ 00:14:33.860 09:07:46 app_cmdline -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:14:34.118 * Looking for test storage... 00:14:34.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:14:34.118 09:07:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:14:34.118 09:07:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61501 00:14:34.118 09:07:46 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:14:34.118 09:07:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61501 00:14:34.118 09:07:46 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 61501 ']' 00:14:34.118 09:07:46 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.118 09:07:46 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:34.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.118 09:07:46 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.118 09:07:46 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:34.118 09:07:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:34.118 [2024-05-15 09:07:46.386656] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:34.118 [2024-05-15 09:07:46.386790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61501 ] 00:14:34.118 [2024-05-15 09:07:46.527115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.375 [2024-05-15 09:07:46.685413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:14:35.309 { 00:14:35.309 "version": "SPDK v24.05-pre git sha1 9526734a3", 00:14:35.309 "fields": { 00:14:35.309 "major": 24, 00:14:35.309 "minor": 5, 00:14:35.309 "patch": 0, 00:14:35.309 "suffix": "-pre", 00:14:35.309 "commit": "9526734a3" 00:14:35.309 } 00:14:35.309 } 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:14:35.309 09:07:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:35.309 09:07:47 app_cmdline -- common/autotest_common.sh@652 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:35.566 request: 00:14:35.566 { 00:14:35.566 "method": "env_dpdk_get_mem_stats", 00:14:35.566 "req_id": 1 00:14:35.566 } 00:14:35.566 Got JSON-RPC error response 00:14:35.566 response: 00:14:35.566 { 00:14:35.566 "code": -32601, 00:14:35.566 "message": "Method not found" 00:14:35.566 } 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:35.567 09:07:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61501 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 61501 ']' 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 61501 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 61501 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:35.567 killing process with pid 61501 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 61501' 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@966 -- # kill 61501 00:14:35.567 09:07:47 app_cmdline -- common/autotest_common.sh@971 -- # wait 61501 00:14:36.131 00:14:36.131 real 0m2.142s 00:14:36.131 user 0m2.673s 00:14:36.131 sys 0m0.485s 00:14:36.131 09:07:48 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:36.131 09:07:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:36.131 ************************************ 00:14:36.131 END TEST app_cmdline 00:14:36.131 ************************************ 00:14:36.131 09:07:48 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:14:36.131 09:07:48 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:36.131 09:07:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:36.131 09:07:48 -- common/autotest_common.sh@10 -- # set +x 00:14:36.131 ************************************ 00:14:36.131 START TEST version 00:14:36.131 ************************************ 00:14:36.131 09:07:48 version -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:14:36.131 * Looking for test storage... 
00:14:36.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:14:36.131 09:07:48 version -- app/version.sh@17 -- # get_header_version major 00:14:36.131 09:07:48 version -- app/version.sh@14 -- # cut -f2 00:14:36.131 09:07:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:36.131 09:07:48 version -- app/version.sh@14 -- # tr -d '"' 00:14:36.131 09:07:48 version -- app/version.sh@17 -- # major=24 00:14:36.131 09:07:48 version -- app/version.sh@18 -- # get_header_version minor 00:14:36.131 09:07:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:36.131 09:07:48 version -- app/version.sh@14 -- # cut -f2 00:14:36.131 09:07:48 version -- app/version.sh@14 -- # tr -d '"' 00:14:36.131 09:07:48 version -- app/version.sh@18 -- # minor=5 00:14:36.131 09:07:48 version -- app/version.sh@19 -- # get_header_version patch 00:14:36.131 09:07:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:36.131 09:07:48 version -- app/version.sh@14 -- # tr -d '"' 00:14:36.131 09:07:48 version -- app/version.sh@14 -- # cut -f2 00:14:36.131 09:07:48 version -- app/version.sh@19 -- # patch=0 00:14:36.131 09:07:48 version -- app/version.sh@20 -- # get_header_version suffix 00:14:36.131 09:07:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:36.131 09:07:48 version -- app/version.sh@14 -- # tr -d '"' 00:14:36.131 09:07:48 version -- app/version.sh@14 -- # cut -f2 00:14:36.131 09:07:48 version -- app/version.sh@20 -- # suffix=-pre 00:14:36.131 09:07:48 version -- app/version.sh@22 -- # version=24.5 00:14:36.131 09:07:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:14:36.131 09:07:48 version -- app/version.sh@28 -- # version=24.5rc0 00:14:36.131 09:07:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:36.131 09:07:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:14:36.388 09:07:48 version -- app/version.sh@30 -- # py_version=24.5rc0 00:14:36.388 09:07:48 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:14:36.388 00:14:36.388 real 0m0.161s 00:14:36.388 user 0m0.094s 00:14:36.388 sys 0m0.100s 00:14:36.388 09:07:48 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:36.388 ************************************ 00:14:36.388 END TEST version 00:14:36.388 ************************************ 00:14:36.388 09:07:48 version -- common/autotest_common.sh@10 -- # set +x 00:14:36.388 09:07:48 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:14:36.388 09:07:48 -- spdk/autotest.sh@194 -- # uname -s 00:14:36.388 09:07:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:14:36.388 09:07:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:14:36.388 09:07:48 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:14:36.388 09:07:48 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:14:36.388 09:07:48 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:14:36.388 09:07:48 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:36.388 09:07:48 -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:14:36.388 09:07:48 -- common/autotest_common.sh@10 -- # set +x 00:14:36.388 ************************************ 00:14:36.388 START TEST spdk_dd 00:14:36.388 ************************************ 00:14:36.389 09:07:48 spdk_dd -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:14:36.389 * Looking for test storage... 00:14:36.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:36.389 09:07:48 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.389 09:07:48 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.389 09:07:48 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.389 09:07:48 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.389 09:07:48 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.389 09:07:48 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.389 09:07:48 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.389 09:07:48 spdk_dd -- paths/export.sh@5 -- # export PATH 00:14:36.389 09:07:48 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.389 09:07:48 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:36.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:36.647 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.647 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.905 09:07:49 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:14:36.905 09:07:49 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:14:36.905 09:07:49 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:14:36.905 09:07:49 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:14:36.905 09:07:49 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:14:36.905 09:07:49 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@230 -- # local class 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@232 -- # local progif 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@233 -- # class=01 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@15 -- # local i 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@24 -- # return 0 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@15 -- # local i 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@24 -- # return 0 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:14:36.906 09:07:49 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:14:36.906 09:07:49 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@139 -- # local lib so 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 
-- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- 
# [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:14:36.906 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* 
]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- 
dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:14:36.907 * spdk_dd linked to liburing 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:36.907 09:07:49 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 
00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:14:36.907 09:07:49 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:36.908 09:07:49 spdk_dd -- 
common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:36.908 09:07:49 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:14:36.908 09:07:49 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:14:36.908 09:07:49 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:14:36.908 09:07:49 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:14:36.908 09:07:49 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:14:36.908 09:07:49 spdk_dd -- dd/common.sh@157 -- # return 0 00:14:36.908 09:07:49 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:14:36.908 09:07:49 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:14:36.908 09:07:49 spdk_dd -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:14:36.908 09:07:49 spdk_dd -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:36.908 09:07:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:14:36.908 ************************************ 00:14:36.908 START TEST spdk_dd_basic_rw 00:14:36.908 ************************************ 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:14:36.908 * Looking for test storage... 
00:14:36.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:14:36.908 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:14:37.177 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:14:37.177 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change 
Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion 
Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- 
dd/basic_rw.sh@93 -- # native_bs=4096 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:14:37.178 ************************************ 00:14:37.178 START TEST dd_bs_lt_native_bs 00:14:37.178 ************************************ 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@649 -- # local es=0 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:37.178 09:07:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:14:37.178 { 00:14:37.178 "subsystems": [ 00:14:37.178 { 00:14:37.178 "subsystem": "bdev", 00:14:37.178 "config": [ 00:14:37.178 { 00:14:37.178 "params": { 00:14:37.178 "trtype": "pcie", 00:14:37.178 "traddr": "0000:00:10.0", 00:14:37.178 "name": "Nvme0" 00:14:37.178 }, 00:14:37.178 "method": "bdev_nvme_attach_controller" 00:14:37.178 }, 00:14:37.178 { 00:14:37.178 "method": "bdev_wait_for_examine" 00:14:37.178 } 00:14:37.178 ] 00:14:37.178 } 00:14:37.178 ] 00:14:37.178 } 00:14:37.178 [2024-05-15 09:07:49.583791] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 
initialization... 00:14:37.178 [2024-05-15 09:07:49.583899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61827 ] 00:14:37.437 [2024-05-15 09:07:49.727146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.437 [2024-05-15 09:07:49.851601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.695 [2024-05-15 09:07:50.008618] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:14:37.695 [2024-05-15 09:07:50.008728] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:37.695 [2024-05-15 09:07:50.122328] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # es=234 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # es=106 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # case "$es" in 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@669 -- # es=1 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:37.953 00:14:37.953 real 0m0.730s 00:14:37.953 user 0m0.507s 00:14:37.953 sys 0m0.155s 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 ************************************ 00:14:37.953 END TEST dd_bs_lt_native_bs 00:14:37.953 ************************************ 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 ************************************ 00:14:37.953 START TEST dd_rw 00:14:37.953 ************************************ 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # basic_rw 4096 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 
00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:37.953 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:38.518 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:14:38.518 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:38.518 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:38.518 09:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:38.518 { 00:14:38.518 "subsystems": [ 00:14:38.518 { 00:14:38.518 "subsystem": "bdev", 00:14:38.518 "config": [ 00:14:38.518 { 00:14:38.518 "params": { 00:14:38.518 "trtype": "pcie", 00:14:38.518 "traddr": "0000:00:10.0", 00:14:38.518 "name": "Nvme0" 00:14:38.518 }, 00:14:38.518 "method": "bdev_nvme_attach_controller" 00:14:38.518 }, 00:14:38.518 { 00:14:38.518 "method": "bdev_wait_for_examine" 00:14:38.518 } 00:14:38.518 ] 00:14:38.518 } 00:14:38.518 ] 00:14:38.518 } 00:14:38.777 [2024-05-15 09:07:50.966220] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:38.777 [2024-05-15 09:07:50.966343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61859 ] 00:14:38.777 [2024-05-15 09:07:51.110363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.777 [2024-05-15 09:07:51.216827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.292  Copying: 60/60 [kB] (average 29 MBps) 00:14:39.292 00:14:39.292 09:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:39.292 09:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:14:39.292 09:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:39.292 09:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:39.292 { 00:14:39.292 "subsystems": [ 00:14:39.292 { 00:14:39.292 "subsystem": "bdev", 00:14:39.292 "config": [ 00:14:39.292 { 00:14:39.292 "params": { 00:14:39.292 "trtype": "pcie", 00:14:39.292 "traddr": "0000:00:10.0", 00:14:39.292 "name": "Nvme0" 00:14:39.292 }, 00:14:39.292 "method": "bdev_nvme_attach_controller" 00:14:39.292 }, 00:14:39.292 { 00:14:39.292 "method": "bdev_wait_for_examine" 00:14:39.292 } 00:14:39.292 ] 00:14:39.292 } 00:14:39.292 ] 00:14:39.292 } 00:14:39.292 [2024-05-15 09:07:51.635677] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:39.292 [2024-05-15 09:07:51.635761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61878 ] 00:14:39.549 [2024-05-15 09:07:51.773341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.549 [2024-05-15 09:07:51.882812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.807  Copying: 60/60 [kB] (average 29 MBps) 00:14:39.807 00:14:39.807 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:40.064 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:40.064 { 00:14:40.064 "subsystems": [ 00:14:40.064 { 00:14:40.064 "subsystem": "bdev", 
00:14:40.064 "config": [ 00:14:40.064 { 00:14:40.064 "params": { 00:14:40.064 "trtype": "pcie", 00:14:40.064 "traddr": "0000:00:10.0", 00:14:40.064 "name": "Nvme0" 00:14:40.064 }, 00:14:40.064 "method": "bdev_nvme_attach_controller" 00:14:40.064 }, 00:14:40.064 { 00:14:40.064 "method": "bdev_wait_for_examine" 00:14:40.064 } 00:14:40.064 ] 00:14:40.064 } 00:14:40.064 ] 00:14:40.064 } 00:14:40.064 [2024-05-15 09:07:52.308836] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:40.065 [2024-05-15 09:07:52.308943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61888 ] 00:14:40.065 [2024-05-15 09:07:52.451751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.322 [2024-05-15 09:07:52.561739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.581  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:40.581 00:14:40.581 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:40.581 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:14:40.581 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:14:40.581 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:14:40.581 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:14:40.581 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:40.581 09:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:41.146 09:07:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:14:41.146 09:07:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:41.146 09:07:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:41.146 09:07:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:41.146 { 00:14:41.146 "subsystems": [ 00:14:41.146 { 00:14:41.146 "subsystem": "bdev", 00:14:41.146 "config": [ 00:14:41.146 { 00:14:41.146 "params": { 00:14:41.146 "trtype": "pcie", 00:14:41.146 "traddr": "0000:00:10.0", 00:14:41.146 "name": "Nvme0" 00:14:41.146 }, 00:14:41.146 "method": "bdev_nvme_attach_controller" 00:14:41.146 }, 00:14:41.146 { 00:14:41.146 "method": "bdev_wait_for_examine" 00:14:41.146 } 00:14:41.146 ] 00:14:41.146 } 00:14:41.146 ] 00:14:41.146 } 00:14:41.146 [2024-05-15 09:07:53.562447] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:41.146 [2024-05-15 09:07:53.562564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61907 ] 00:14:41.404 [2024-05-15 09:07:53.708281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.404 [2024-05-15 09:07:53.815913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.921  Copying: 60/60 [kB] (average 58 MBps) 00:14:41.921 00:14:41.921 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:14:41.921 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:41.921 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:41.921 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:41.921 [2024-05-15 09:07:54.244902] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:41.921 [2024-05-15 09:07:54.245043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61926 ] 00:14:41.921 { 00:14:41.921 "subsystems": [ 00:14:41.921 { 00:14:41.921 "subsystem": "bdev", 00:14:41.921 "config": [ 00:14:41.921 { 00:14:41.921 "params": { 00:14:41.921 "trtype": "pcie", 00:14:41.921 "traddr": "0000:00:10.0", 00:14:41.921 "name": "Nvme0" 00:14:41.921 }, 00:14:41.921 "method": "bdev_nvme_attach_controller" 00:14:41.921 }, 00:14:41.921 { 00:14:41.921 "method": "bdev_wait_for_examine" 00:14:41.921 } 00:14:41.921 ] 00:14:41.921 } 00:14:41.921 ] 00:14:41.921 } 00:14:42.179 [2024-05-15 09:07:54.389851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.179 [2024-05-15 09:07:54.517713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.694  Copying: 60/60 [kB] (average 58 MBps) 00:14:42.694 00:14:42.694 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:42.694 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:42.695 09:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:42.695 [2024-05-15 09:07:54.951512] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 
23.11.0 initialization... 00:14:42.695 [2024-05-15 09:07:54.951629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61947 ] 00:14:42.695 { 00:14:42.695 "subsystems": [ 00:14:42.695 { 00:14:42.695 "subsystem": "bdev", 00:14:42.695 "config": [ 00:14:42.695 { 00:14:42.695 "params": { 00:14:42.695 "trtype": "pcie", 00:14:42.695 "traddr": "0000:00:10.0", 00:14:42.695 "name": "Nvme0" 00:14:42.695 }, 00:14:42.695 "method": "bdev_nvme_attach_controller" 00:14:42.695 }, 00:14:42.695 { 00:14:42.695 "method": "bdev_wait_for_examine" 00:14:42.695 } 00:14:42.695 ] 00:14:42.695 } 00:14:42.695 ] 00:14:42.695 } 00:14:42.695 [2024-05-15 09:07:55.088059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.952 [2024-05-15 09:07:55.217202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.209  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:43.209 00:14:43.209 09:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:14:43.209 09:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:43.209 09:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:14:43.209 09:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:14:43.209 09:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:14:43.209 09:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:14:43.209 09:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:43.209 09:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:43.776 09:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:14:43.776 09:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:43.776 09:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:43.776 09:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:43.776 [2024-05-15 09:07:56.175303] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:43.776 [2024-05-15 09:07:56.175395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61966 ] 00:14:43.776 { 00:14:43.776 "subsystems": [ 00:14:43.776 { 00:14:43.776 "subsystem": "bdev", 00:14:43.776 "config": [ 00:14:43.776 { 00:14:43.776 "params": { 00:14:43.776 "trtype": "pcie", 00:14:43.776 "traddr": "0000:00:10.0", 00:14:43.776 "name": "Nvme0" 00:14:43.776 }, 00:14:43.776 "method": "bdev_nvme_attach_controller" 00:14:43.776 }, 00:14:43.776 { 00:14:43.776 "method": "bdev_wait_for_examine" 00:14:43.776 } 00:14:43.776 ] 00:14:43.776 } 00:14:43.776 ] 00:14:43.776 } 00:14:44.033 [2024-05-15 09:07:56.314966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.033 [2024-05-15 09:07:56.438971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.549  Copying: 56/56 [kB] (average 54 MBps) 00:14:44.549 00:14:44.549 09:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:14:44.549 09:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:44.549 09:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:44.549 09:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:44.549 [2024-05-15 09:07:56.867480] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:44.549 [2024-05-15 09:07:56.868170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61988 ] 00:14:44.549 { 00:14:44.549 "subsystems": [ 00:14:44.549 { 00:14:44.549 "subsystem": "bdev", 00:14:44.549 "config": [ 00:14:44.549 { 00:14:44.549 "params": { 00:14:44.549 "trtype": "pcie", 00:14:44.549 "traddr": "0000:00:10.0", 00:14:44.549 "name": "Nvme0" 00:14:44.549 }, 00:14:44.549 "method": "bdev_nvme_attach_controller" 00:14:44.549 }, 00:14:44.549 { 00:14:44.549 "method": "bdev_wait_for_examine" 00:14:44.549 } 00:14:44.549 ] 00:14:44.549 } 00:14:44.549 ] 00:14:44.549 } 00:14:44.807 [2024-05-15 09:07:57.007131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.807 [2024-05-15 09:07:57.115049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.065  Copying: 56/56 [kB] (average 27 MBps) 00:14:45.065 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:45.065 09:07:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 { 00:14:45.323 "subsystems": [ 00:14:45.323 { 00:14:45.323 "subsystem": "bdev", 00:14:45.323 "config": [ 00:14:45.323 { 00:14:45.323 "params": { 00:14:45.323 "trtype": "pcie", 00:14:45.323 "traddr": "0000:00:10.0", 00:14:45.323 "name": "Nvme0" 00:14:45.323 }, 00:14:45.323 "method": "bdev_nvme_attach_controller" 00:14:45.323 }, 00:14:45.323 { 00:14:45.323 "method": "bdev_wait_for_examine" 00:14:45.323 } 00:14:45.323 ] 00:14:45.323 } 00:14:45.323 ] 00:14:45.323 } 00:14:45.323 [2024-05-15 09:07:57.541948] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:45.323 [2024-05-15 09:07:57.542401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61998 ] 00:14:45.323 [2024-05-15 09:07:57.686728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.589 [2024-05-15 09:07:57.797690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.846  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:45.846 00:14:45.846 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:45.846 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:14:45.846 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:14:45.846 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:14:45.846 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:14:45.846 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:45.846 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:46.451 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:14:46.451 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:46.451 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:46.451 09:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:46.451 [2024-05-15 09:07:58.731915] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:46.451 [2024-05-15 09:07:58.732371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62017 ] 00:14:46.451 { 00:14:46.451 "subsystems": [ 00:14:46.451 { 00:14:46.451 "subsystem": "bdev", 00:14:46.451 "config": [ 00:14:46.451 { 00:14:46.451 "params": { 00:14:46.451 "trtype": "pcie", 00:14:46.451 "traddr": "0000:00:10.0", 00:14:46.451 "name": "Nvme0" 00:14:46.451 }, 00:14:46.451 "method": "bdev_nvme_attach_controller" 00:14:46.451 }, 00:14:46.451 { 00:14:46.451 "method": "bdev_wait_for_examine" 00:14:46.452 } 00:14:46.452 ] 00:14:46.452 } 00:14:46.452 ] 00:14:46.452 } 00:14:46.452 [2024-05-15 09:07:58.870728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.710 [2024-05-15 09:07:58.978911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.970  Copying: 56/56 [kB] (average 54 MBps) 00:14:46.970 00:14:46.970 09:07:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:14:46.970 09:07:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:46.970 09:07:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:46.970 09:07:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:46.970 { 00:14:46.970 "subsystems": [ 00:14:46.970 { 00:14:46.970 "subsystem": "bdev", 00:14:46.970 "config": [ 00:14:46.970 { 00:14:46.970 "params": { 00:14:46.970 "trtype": "pcie", 00:14:46.970 "traddr": "0000:00:10.0", 00:14:46.970 "name": "Nvme0" 00:14:46.970 }, 00:14:46.970 "method": "bdev_nvme_attach_controller" 00:14:46.970 }, 00:14:46.970 { 00:14:46.970 "method": "bdev_wait_for_examine" 00:14:46.970 } 00:14:46.970 ] 00:14:46.970 } 00:14:46.970 ] 00:14:46.970 } 00:14:47.229 [2024-05-15 09:07:59.420964] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:47.229 [2024-05-15 09:07:59.421772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62036 ] 00:14:47.229 [2024-05-15 09:07:59.569551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.550 [2024-05-15 09:07:59.685189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.810  Copying: 56/56 [kB] (average 54 MBps) 00:14:47.810 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:47.810 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:47.810 [2024-05-15 09:08:00.110697] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:47.810 [2024-05-15 09:08:00.111598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62057 ] 00:14:47.810 { 00:14:47.810 "subsystems": [ 00:14:47.810 { 00:14:47.810 "subsystem": "bdev", 00:14:47.810 "config": [ 00:14:47.810 { 00:14:47.810 "params": { 00:14:47.810 "trtype": "pcie", 00:14:47.810 "traddr": "0000:00:10.0", 00:14:47.810 "name": "Nvme0" 00:14:47.810 }, 00:14:47.810 "method": "bdev_nvme_attach_controller" 00:14:47.810 }, 00:14:47.810 { 00:14:47.810 "method": "bdev_wait_for_examine" 00:14:47.810 } 00:14:47.810 ] 00:14:47.810 } 00:14:47.810 ] 00:14:47.810 } 00:14:47.810 [2024-05-15 09:08:00.250781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.068 [2024-05-15 09:08:00.357273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.326  Copying: 1024/1024 [kB] (average 1000 MBps) 00:14:48.326 00:14:48.326 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:14:48.326 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:48.326 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:14:48.326 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:14:48.326 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:14:48.326 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:14:48.326 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:48.326 09:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:48.891 09:08:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:14:48.891 09:08:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:48.891 09:08:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:48.891 09:08:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:48.891 { 00:14:48.891 "subsystems": [ 00:14:48.891 { 00:14:48.891 "subsystem": "bdev", 00:14:48.891 "config": [ 00:14:48.891 { 00:14:48.891 "params": { 00:14:48.891 "trtype": "pcie", 00:14:48.891 "traddr": "0000:00:10.0", 00:14:48.891 "name": "Nvme0" 00:14:48.891 }, 00:14:48.891 "method": "bdev_nvme_attach_controller" 00:14:48.891 }, 00:14:48.891 { 00:14:48.891 "method": "bdev_wait_for_examine" 00:14:48.891 } 00:14:48.891 ] 00:14:48.891 } 00:14:48.891 ] 00:14:48.891 } 00:14:48.891 [2024-05-15 09:08:01.261033] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:48.891 [2024-05-15 09:08:01.261301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62076 ] 00:14:49.149 [2024-05-15 09:08:01.407299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.149 [2024-05-15 09:08:01.513933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.665  Copying: 48/48 [kB] (average 46 MBps) 00:14:49.665 00:14:49.665 09:08:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:14:49.665 09:08:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:49.665 09:08:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:49.665 09:08:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:49.665 { 00:14:49.665 "subsystems": [ 00:14:49.665 { 00:14:49.665 "subsystem": "bdev", 00:14:49.665 "config": [ 00:14:49.665 { 00:14:49.665 "params": { 00:14:49.665 "trtype": "pcie", 00:14:49.666 "traddr": "0000:00:10.0", 00:14:49.666 "name": "Nvme0" 00:14:49.666 }, 00:14:49.666 "method": "bdev_nvme_attach_controller" 00:14:49.666 }, 00:14:49.666 { 00:14:49.666 "method": "bdev_wait_for_examine" 00:14:49.666 } 00:14:49.666 ] 00:14:49.666 } 00:14:49.666 ] 00:14:49.666 } 00:14:49.666 [2024-05-15 09:08:01.938896] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:49.666 [2024-05-15 09:08:01.939227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62090 ] 00:14:49.666 [2024-05-15 09:08:02.086572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.924 [2024-05-15 09:08:02.225247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.196  Copying: 48/48 [kB] (average 46 MBps) 00:14:50.196 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:50.196 09:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:50.196 { 00:14:50.196 "subsystems": [ 00:14:50.196 { 00:14:50.196 "subsystem": "bdev", 
00:14:50.196 "config": [ 00:14:50.196 { 00:14:50.196 "params": { 00:14:50.196 "trtype": "pcie", 00:14:50.196 "traddr": "0000:00:10.0", 00:14:50.196 "name": "Nvme0" 00:14:50.196 }, 00:14:50.196 "method": "bdev_nvme_attach_controller" 00:14:50.196 }, 00:14:50.196 { 00:14:50.196 "method": "bdev_wait_for_examine" 00:14:50.196 } 00:14:50.196 ] 00:14:50.196 } 00:14:50.196 ] 00:14:50.196 } 00:14:50.454 [2024-05-15 09:08:02.654067] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:50.454 [2024-05-15 09:08:02.654430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62105 ] 00:14:50.454 [2024-05-15 09:08:02.797413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.711 [2024-05-15 09:08:02.908057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.968  Copying: 1024/1024 [kB] (average 1000 MBps) 00:14:50.968 00:14:50.968 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:14:50.968 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:14:50.968 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:14:50.968 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:14:50.968 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:14:50.968 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:14:50.968 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:51.530 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:14:51.530 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:14:51.530 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:51.530 09:08:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:51.530 { 00:14:51.530 "subsystems": [ 00:14:51.530 { 00:14:51.530 "subsystem": "bdev", 00:14:51.530 "config": [ 00:14:51.530 { 00:14:51.530 "params": { 00:14:51.530 "trtype": "pcie", 00:14:51.530 "traddr": "0000:00:10.0", 00:14:51.530 "name": "Nvme0" 00:14:51.530 }, 00:14:51.530 "method": "bdev_nvme_attach_controller" 00:14:51.530 }, 00:14:51.530 { 00:14:51.530 "method": "bdev_wait_for_examine" 00:14:51.530 } 00:14:51.530 ] 00:14:51.530 } 00:14:51.530 ] 00:14:51.530 } 00:14:51.530 [2024-05-15 09:08:03.781420] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:51.530 [2024-05-15 09:08:03.782002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62124 ] 00:14:51.530 [2024-05-15 09:08:03.929043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.788 [2024-05-15 09:08:04.053176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.045  Copying: 48/48 [kB] (average 46 MBps) 00:14:52.045 00:14:52.045 09:08:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:14:52.045 09:08:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:14:52.045 09:08:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:52.045 09:08:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:52.045 { 00:14:52.045 "subsystems": [ 00:14:52.045 { 00:14:52.045 "subsystem": "bdev", 00:14:52.045 "config": [ 00:14:52.045 { 00:14:52.045 "params": { 00:14:52.045 "trtype": "pcie", 00:14:52.045 "traddr": "0000:00:10.0", 00:14:52.045 "name": "Nvme0" 00:14:52.045 }, 00:14:52.045 "method": "bdev_nvme_attach_controller" 00:14:52.045 }, 00:14:52.045 { 00:14:52.045 "method": "bdev_wait_for_examine" 00:14:52.045 } 00:14:52.045 ] 00:14:52.045 } 00:14:52.045 ] 00:14:52.045 } 00:14:52.045 [2024-05-15 09:08:04.480302] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:52.045 [2024-05-15 09:08:04.480720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62143 ] 00:14:52.304 [2024-05-15 09:08:04.627321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.588 [2024-05-15 09:08:04.761243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.859  Copying: 48/48 [kB] (average 46 MBps) 00:14:52.859 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:52.859 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:52.859 [2024-05-15 09:08:05.177373] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 
23.11.0 initialization... 00:14:52.859 [2024-05-15 09:08:05.177737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62159 ] 00:14:52.859 { 00:14:52.859 "subsystems": [ 00:14:52.859 { 00:14:52.859 "subsystem": "bdev", 00:14:52.859 "config": [ 00:14:52.859 { 00:14:52.859 "params": { 00:14:52.859 "trtype": "pcie", 00:14:52.859 "traddr": "0000:00:10.0", 00:14:52.859 "name": "Nvme0" 00:14:52.859 }, 00:14:52.859 "method": "bdev_nvme_attach_controller" 00:14:52.859 }, 00:14:52.859 { 00:14:52.859 "method": "bdev_wait_for_examine" 00:14:52.859 } 00:14:52.859 ] 00:14:52.859 } 00:14:52.859 ] 00:14:52.859 } 00:14:53.117 [2024-05-15 09:08:05.316732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.117 [2024-05-15 09:08:05.437692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.375  Copying: 1024/1024 [kB] (average 1000 MBps) 00:14:53.375 00:14:53.633 ************************************ 00:14:53.633 END TEST dd_rw 00:14:53.633 ************************************ 00:14:53.633 00:14:53.633 real 0m15.519s 00:14:53.633 user 0m11.273s 00:14:53.633 sys 0m5.244s 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:53.633 ************************************ 00:14:53.633 START TEST dd_rw_offset 00:14:53.633 ************************************ 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # basic_offset 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=6gc1uorgslr1jpdsdw4n10duxo3wrjs2jw1d41zemdsp25v3qzfcnhtgjh86ayqf6m6bq0bhip2v74ggh3oao47rrd88rf0fvan7mb4gyo9wddi4kyodpjwyla83hls7lvkveefos93w10xzbfdvp1zjb84y88h2yucwec6d4bpey0cqy6vo8ixq4zupy7nwabf4ee2vehr7ewm2jegp71sxk15zyseonuuh1f88g5c78tc9abfh1h3i4a6ssgjhkoez3f1fw8k857u2mtwiv9woedn03u59sbni4a2535sjlb5zvx9ao4wgtjo24n3ia49r71orxmih4x9nlh8p2v3ngb08id6tt5zx9flu8byh7j967fd3c70pav87oit2jllsufww31tcgmytj7u1np9pij1n2wahdmqk71m64jh7b6zfae2o66u40t0l8kn8haxoyfacyl0wu7uhjc1e99au0seedflzfysc1tycb927q2q0jltbfm8y8j0n8htvshi1pgc7ywe5hxzmw3amhuu7ost2621vpigpzu56g17gmofkrboid7o1iofih9h08vz9pn407k7a68gm5izdlvi7iid4k8yw2l6438jjqm0s1amfvz9ulhipvk028eyovbow54qrufjzk2bi64t4zl2jsn3ph3s4e2cokmrmxh3rpny74w44nomunpgeq01tjrs05wv6unobvu7yo3uf3xanqlp47pda6mzz1cgs836od4nrnwciat47iw34e2iyiepybg97kb27crg69nz6wlpegqxlviwoa7itiq8oroh94xfss09oje1n2rar3zf513ijf14hth8x7pha2l1negglywt8sinf0my4bi44ry6mppkzmxlun6i17a80ji18asrm7l0qqz7y2xgk78alffdivwy4h8ey57ozit7u2wa7sr2lw93tzaj8tovwmecdagsywgr5mtnu7qmrollgyczd17shemgmefyfekwgzmpupo8kivzuc3n20s3c9ogxlbdllv79ocftebm3me4z8hjkpnhq6t9mgqjkdlgs45mxe4hwpyfwhhvlg7e68151urmdt2l12xo9jew4ny8vgl98z3pt8fasa5dgj6li2nk1n585phizaxv9gcneqgu0qztfq49ene7s7mpbi6pq0z12a12jvjc0awaamc68pula7wt8p0zv0c5r3vkd5szyontxtsqmsmuef00j7k23ijfh9kibnmd1vlm522f0u1r4r7mul1d4fq8uwwsd0lqa9z0se2784ip3n1f2p27r9y5hacmivxwukcwj2k36z254dhjilfrefyju5d4i73kwqqv9t3vbilzzhhbe9kb29mgiojlwrq7ghumjq4hwk0s8fm3h5bo9y5lwmfjc10nefo3zhfnb2weh82lg0hs4c5p4zm54gbg5tpxl8kxw7t79jn56is6a15tvr36y4cm0zb45wj5cdqlwyt0tj2l1a3dlv6cc1w1rmcd462rzhj2npmw9d7ciwbeq4wmiq7l6x77zkz4acg6iyikhvyoeyu7erzsk9hstata4vv41lda66477ny58ikjceb3ffbv7vfj8w54taq3n9kkcduxntwc0weplnnqubfxsk4bcpmrnwal9jix40z5vg0sch1fa570tppa4aavxjvc8oevtfhl4b6k28eu8x4iz39ai4suwrwwlkt3v4tkcf1ogi95ju7ds2vb7ztq35xmmp3ebsl8oor1qmzxxorab5sezxcb9fsyc658dsaopze5a9yiirw7o93uu9e1x26o6izmsu7cug5d402jo2ilsqu8vljvbho4oqrleual20o00voohwxhs5ys7e1vc3te77lj26bwf15kk51av5p1n0j7z67cexc2wimu0wb3g6eifpy099hl1ub7ux0oj0we6sgdoe44kzs696qk2d1f49bpf7iwr5w8pusfay0qz2l9jufut0qhh46z5hiy57efksbxrwamec6us1ans9pqztyg4f1l0jcle7lp9flgvkdkltlypr17n31i16x6s6mg5vplcf6xqgiipk77p8mdcvkowvom4ssk03iq5og4z5y7ceo2tc1r570kjkc7u6adghtpt4dmd5bq39n81xtg1e5kejl4xr9uudc78y2adkxgz8sz01bm1ebwidgw9n3t318jjiu7o9q2ljpujy5g3iqoizk5637vpu83ilg36powhufia73i40bt4x8fzjl83xtsixeipylomcuwnj8udvd9uohpo7kfd0hl261ldt03vbdaklm76wvmcgpfo4ubucntdod3b1113i8utp2x5myvunszmvxly0jl1xyao1asvt4psdvt9b2va6w9zp4qlq1qy4rnlcbbpc7pywkw1kg9ubw4obdvm08mph8de7a73flzd5z6v43dlwtijcnb6uad4te5kfz9a7w3cje9f0pjhflr1mvne1nbdse8eiya8z2ffmbsslumbal76bm30qvwcnmm8k96k2dise6h0ziwvxdln7mz7n4undarlwtnvr90x5z95m9lptqk0onysu6jn2pxirb37vsrpmglc23l8blxw85hjfylz8cud1e6wghs0bamy22x0x1uqyy3hrhb50eqbqtvh42xk14e5fxywwn3ggj0cg6hsy18lu3l4c2z50wm8lvifdcry54r2fppocq3rs98di9umlk1c1f5mcfks3ev52psb6s7nwu2b3newzaaacmw94ufvbtkz1ezx8792udt5c19xglmiu46ex7hyte5mcir6bamkvv9hv6kqkr0tk00fploylneb1l3nfxl5tmv1hhn5ix25nlu1ocw8t08vt2a26fyz7g40nofjc790fa634l7kld64tcyyq19uhygd9a51zrp1uwxkv1em02bc95bpo9t0k2ec1bssr0txx6kfa5w4wzp22e80willwfepy7yif8unf15ecgy7hjui5hmb4qewsecoyriv0k2sjnb6zjy29cnydazqydy2oaus7oe1rijxac9m2xms9m5zbhj3x2ohdg1arofl9lgv4k38kdjetuqo3zpc45pj5wgrrny8wjvokiss3afs0v5ukublg6k8kclyjiqr4m96o7p8kkdawmw9b2niziauujqejld7ows2wu5uczq4zxl9i7x5kkavqu5tgvbf2jo77jheby0xgv1olm3p8amxbgy3af85jwi0rhgng4xpmc4dlc65oxyfg1ujqtj5a70hp1xm60pw2bi7llm5epma5jhwxw5cfqukxipavb7n153ir22088v1nnrciqmtj890x1j3481wnr3ygi3zsgya1o1a1i79sj543qicy6t25240qsm7wptnak988bmwddbo9p0enjqgvyyppb04dxfkxz6nn4tura48k6odaq28lw69y2gb5om6roltobufx9to4cygxtdbgqtiiqsk15vyi0nxxty7blpnewc1yriehxvf9ddll2se4i1tm3h85t49jtn73p9i0322bqd1eawm9l47qs
93ngrkfnsilij7tbri5njxf1wh8mmrs28kxem9vpqlf5zabdqno6xp06vuumh6sxzo68fobrooq8n7khm756vwuk8l58uhbfk3v79zj9k4ytn8nf8qxzbujjtm10r2c1ijo9dt1hlc97yl2x0t7dawfzddj7tgt2cqg78k70v29rx14q0npuuledt7zk6w19m6k85alkyrckju5wesp68c9xzwpzjx5w89ejayy7du284b43h5lkgw2ky6l88cgxlp3guiif8wksy6sbagw1b02xzli9cxld7exfgrz3rmphpx3cttehnbye1b9kzqatdjz35wa4nyrn4eku6fk0sjrc2hw4dmbp9j1wif27j4hqhru62pqxbrozv4dn53gnh9pzfz6y80f6m9vleqskx6ryjslu58f20nzb3w5vsqakcw5pphep1a9r8rhqiscev13xildorkfyiyzyfm9uthhexjawffxdu1fi9thg08ucqrgsrdzfs0yf0sn2a37rpx6meq14krc4xnigb2rgko4pyqnfsy4xh7 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:14:53.633 09:08:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:14:53.633 { 00:14:53.633 "subsystems": [ 00:14:53.633 { 00:14:53.633 "subsystem": "bdev", 00:14:53.633 "config": [ 00:14:53.633 { 00:14:53.633 "params": { 00:14:53.633 "trtype": "pcie", 00:14:53.633 "traddr": "0000:00:10.0", 00:14:53.633 "name": "Nvme0" 00:14:53.633 }, 00:14:53.633 "method": "bdev_nvme_attach_controller" 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "bdev_wait_for_examine" 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 } 00:14:53.634 [2024-05-15 09:08:05.981761] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:53.634 [2024-05-15 09:08:05.982177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62195 ] 00:14:53.891 [2024-05-15 09:08:06.128927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.891 [2024-05-15 09:08:06.239517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.407  Copying: 4096/4096 [B] (average 4000 kBps) 00:14:54.407 00:14:54.407 09:08:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:14:54.407 09:08:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:14:54.407 09:08:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:14:54.407 09:08:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:14:54.407 [2024-05-15 09:08:06.661708] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:54.407 [2024-05-15 09:08:06.662004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62208 ] 00:14:54.407 { 00:14:54.407 "subsystems": [ 00:14:54.407 { 00:14:54.407 "subsystem": "bdev", 00:14:54.407 "config": [ 00:14:54.407 { 00:14:54.407 "params": { 00:14:54.407 "trtype": "pcie", 00:14:54.407 "traddr": "0000:00:10.0", 00:14:54.407 "name": "Nvme0" 00:14:54.407 }, 00:14:54.407 "method": "bdev_nvme_attach_controller" 00:14:54.407 }, 00:14:54.407 { 00:14:54.407 "method": "bdev_wait_for_examine" 00:14:54.407 } 00:14:54.407 ] 00:14:54.407 } 00:14:54.407 ] 00:14:54.407 } 00:14:54.407 [2024-05-15 09:08:06.795968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.664 [2024-05-15 09:08:06.902986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.938  Copying: 4096/4096 [B] (average 4000 kBps) 00:14:54.938 00:14:54.938 09:08:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 6gc1uorgslr1jpdsdw4n10duxo3wrjs2jw1d41zemdsp25v3qzfcnhtgjh86ayqf6m6bq0bhip2v74ggh3oao47rrd88rf0fvan7mb4gyo9wddi4kyodpjwyla83hls7lvkveefos93w10xzbfdvp1zjb84y88h2yucwec6d4bpey0cqy6vo8ixq4zupy7nwabf4ee2vehr7ewm2jegp71sxk15zyseonuuh1f88g5c78tc9abfh1h3i4a6ssgjhkoez3f1fw8k857u2mtwiv9woedn03u59sbni4a2535sjlb5zvx9ao4wgtjo24n3ia49r71orxmih4x9nlh8p2v3ngb08id6tt5zx9flu8byh7j967fd3c70pav87oit2jllsufww31tcgmytj7u1np9pij1n2wahdmqk71m64jh7b6zfae2o66u40t0l8kn8haxoyfacyl0wu7uhjc1e99au0seedflzfysc1tycb927q2q0jltbfm8y8j0n8htvshi1pgc7ywe5hxzmw3amhuu7ost2621vpigpzu56g17gmofkrboid7o1iofih9h08vz9pn407k7a68gm5izdlvi7iid4k8yw2l6438jjqm0s1amfvz9ulhipvk028eyovbow54qrufjzk2bi64t4zl2jsn3ph3s4e2cokmrmxh3rpny74w44nomunpgeq01tjrs05wv6unobvu7yo3uf3xanqlp47pda6mzz1cgs836od4nrnwciat47iw34e2iyiepybg97kb27crg69nz6wlpegqxlviwoa7itiq8oroh94xfss09oje1n2rar3zf513ijf14hth8x7pha2l1negglywt8sinf0my4bi44ry6mppkzmxlun6i17a80ji18asrm7l0qqz7y2xgk78alffdivwy4h8ey57ozit7u2wa7sr2lw93tzaj8tovwmecdagsywgr5mtnu7qmrollgyczd17shemgmefyfekwgzmpupo8kivzuc3n20s3c9ogxlbdllv79ocftebm3me4z8hjkpnhq6t9mgqjkdlgs45mxe4hwpyfwhhvlg7e68151urmdt2l12xo9jew4ny8vgl98z3pt8fasa5dgj6li2nk1n585phizaxv9gcneqgu0qztfq49ene7s7mpbi6pq0z12a12jvjc0awaamc68pula7wt8p0zv0c5r3vkd5szyontxtsqmsmuef00j7k23ijfh9kibnmd1vlm522f0u1r4r7mul1d4fq8uwwsd0lqa9z0se2784ip3n1f2p27r9y5hacmivxwukcwj2k36z254dhjilfrefyju5d4i73kwqqv9t3vbilzzhhbe9kb29mgiojlwrq7ghumjq4hwk0s8fm3h5bo9y5lwmfjc10nefo3zhfnb2weh82lg0hs4c5p4zm54gbg5tpxl8kxw7t79jn56is6a15tvr36y4cm0zb45wj5cdqlwyt0tj2l1a3dlv6cc1w1rmcd462rzhj2npmw9d7ciwbeq4wmiq7l6x77zkz4acg6iyikhvyoeyu7erzsk9hstata4vv41lda66477ny58ikjceb3ffbv7vfj8w54taq3n9kkcduxntwc0weplnnqubfxsk4bcpmrnwal9jix40z5vg0sch1fa570tppa4aavxjvc8oevtfhl4b6k28eu8x4iz39ai4suwrwwlkt3v4tkcf1ogi95ju7ds2vb7ztq35xmmp3ebsl8oor1qmzxxorab5sezxcb9fsyc658dsaopze5a9yiirw7o93uu9e1x26o6izmsu7cug5d402jo2ilsqu8vljvbho4oqrleual20o00voohwxhs5ys7e1vc3te77lj26bwf15kk51av5p1n0j7z67cexc2wimu0wb3g6eifpy099hl1ub7ux0oj0we6sgdoe44kzs696qk2d1f49bpf7iwr5w8pusfay0qz2l9jufut0qhh46z5hiy57efksbxrwamec6us1ans9pqztyg4f1l0jcle7lp9flgvkdkltlypr17n31i16x6s6mg5vplcf6xqgiipk77p8mdcvkowvom4ssk03iq5og4z5y7ceo2tc1r570kjkc7u6adghtpt4dmd5bq39n81xtg1e5kejl4xr9uudc78y2adkxgz8sz01bm1ebwidgw9n3t318jjiu7o9q2ljpujy5g3iqoizk5637vpu83ilg36powhufia73i40bt4x8fzjl83xtsixeipylomcuwnj8udvd9uohpo7kfd0hl261ldt03vbdaklm76wv
mcgpfo4ubucntdod3b1113i8utp2x5myvunszmvxly0jl1xyao1asvt4psdvt9b2va6w9zp4qlq1qy4rnlcbbpc7pywkw1kg9ubw4obdvm08mph8de7a73flzd5z6v43dlwtijcnb6uad4te5kfz9a7w3cje9f0pjhflr1mvne1nbdse8eiya8z2ffmbsslumbal76bm30qvwcnmm8k96k2dise6h0ziwvxdln7mz7n4undarlwtnvr90x5z95m9lptqk0onysu6jn2pxirb37vsrpmglc23l8blxw85hjfylz8cud1e6wghs0bamy22x0x1uqyy3hrhb50eqbqtvh42xk14e5fxywwn3ggj0cg6hsy18lu3l4c2z50wm8lvifdcry54r2fppocq3rs98di9umlk1c1f5mcfks3ev52psb6s7nwu2b3newzaaacmw94ufvbtkz1ezx8792udt5c19xglmiu46ex7hyte5mcir6bamkvv9hv6kqkr0tk00fploylneb1l3nfxl5tmv1hhn5ix25nlu1ocw8t08vt2a26fyz7g40nofjc790fa634l7kld64tcyyq19uhygd9a51zrp1uwxkv1em02bc95bpo9t0k2ec1bssr0txx6kfa5w4wzp22e80willwfepy7yif8unf15ecgy7hjui5hmb4qewsecoyriv0k2sjnb6zjy29cnydazqydy2oaus7oe1rijxac9m2xms9m5zbhj3x2ohdg1arofl9lgv4k38kdjetuqo3zpc45pj5wgrrny8wjvokiss3afs0v5ukublg6k8kclyjiqr4m96o7p8kkdawmw9b2niziauujqejld7ows2wu5uczq4zxl9i7x5kkavqu5tgvbf2jo77jheby0xgv1olm3p8amxbgy3af85jwi0rhgng4xpmc4dlc65oxyfg1ujqtj5a70hp1xm60pw2bi7llm5epma5jhwxw5cfqukxipavb7n153ir22088v1nnrciqmtj890x1j3481wnr3ygi3zsgya1o1a1i79sj543qicy6t25240qsm7wptnak988bmwddbo9p0enjqgvyyppb04dxfkxz6nn4tura48k6odaq28lw69y2gb5om6roltobufx9to4cygxtdbgqtiiqsk15vyi0nxxty7blpnewc1yriehxvf9ddll2se4i1tm3h85t49jtn73p9i0322bqd1eawm9l47qs93ngrkfnsilij7tbri5njxf1wh8mmrs28kxem9vpqlf5zabdqno6xp06vuumh6sxzo68fobrooq8n7khm756vwuk8l58uhbfk3v79zj9k4ytn8nf8qxzbujjtm10r2c1ijo9dt1hlc97yl2x0t7dawfzddj7tgt2cqg78k70v29rx14q0npuuledt7zk6w19m6k85alkyrckju5wesp68c9xzwpzjx5w89ejayy7du284b43h5lkgw2ky6l88cgxlp3guiif8wksy6sbagw1b02xzli9cxld7exfgrz3rmphpx3cttehnbye1b9kzqatdjz35wa4nyrn4eku6fk0sjrc2hw4dmbp9j1wif27j4hqhru62pqxbrozv4dn53gnh9pzfz6y80f6m9vleqskx6ryjslu58f20nzb3w5vsqakcw5pphep1a9r8rhqiscev13xildorkfyiyzyfm9uthhexjawffxdu1fi9thg08ucqrgsrdzfs0yf0sn2a37rpx6meq14krc4xnigb2rgko4pyqnfsy4xh7 == 
\6\g\c\1\u\o\r\g\s\l\r\1\j\p\d\s\d\w\4\n\1\0\d\u\x\o\3\w\r\j\s\2\j\w\1\d\4\1\z\e\m\d\s\p\2\5\v\3\q\z\f\c\n\h\t\g\j\h\8\6\a\y\q\f\6\m\6\b\q\0\b\h\i\p\2\v\7\4\g\g\h\3\o\a\o\4\7\r\r\d\8\8\r\f\0\f\v\a\n\7\m\b\4\g\y\o\9\w\d\d\i\4\k\y\o\d\p\j\w\y\l\a\8\3\h\l\s\7\l\v\k\v\e\e\f\o\s\9\3\w\1\0\x\z\b\f\d\v\p\1\z\j\b\8\4\y\8\8\h\2\y\u\c\w\e\c\6\d\4\b\p\e\y\0\c\q\y\6\v\o\8\i\x\q\4\z\u\p\y\7\n\w\a\b\f\4\e\e\2\v\e\h\r\7\e\w\m\2\j\e\g\p\7\1\s\x\k\1\5\z\y\s\e\o\n\u\u\h\1\f\8\8\g\5\c\7\8\t\c\9\a\b\f\h\1\h\3\i\4\a\6\s\s\g\j\h\k\o\e\z\3\f\1\f\w\8\k\8\5\7\u\2\m\t\w\i\v\9\w\o\e\d\n\0\3\u\5\9\s\b\n\i\4\a\2\5\3\5\s\j\l\b\5\z\v\x\9\a\o\4\w\g\t\j\o\2\4\n\3\i\a\4\9\r\7\1\o\r\x\m\i\h\4\x\9\n\l\h\8\p\2\v\3\n\g\b\0\8\i\d\6\t\t\5\z\x\9\f\l\u\8\b\y\h\7\j\9\6\7\f\d\3\c\7\0\p\a\v\8\7\o\i\t\2\j\l\l\s\u\f\w\w\3\1\t\c\g\m\y\t\j\7\u\1\n\p\9\p\i\j\1\n\2\w\a\h\d\m\q\k\7\1\m\6\4\j\h\7\b\6\z\f\a\e\2\o\6\6\u\4\0\t\0\l\8\k\n\8\h\a\x\o\y\f\a\c\y\l\0\w\u\7\u\h\j\c\1\e\9\9\a\u\0\s\e\e\d\f\l\z\f\y\s\c\1\t\y\c\b\9\2\7\q\2\q\0\j\l\t\b\f\m\8\y\8\j\0\n\8\h\t\v\s\h\i\1\p\g\c\7\y\w\e\5\h\x\z\m\w\3\a\m\h\u\u\7\o\s\t\2\6\2\1\v\p\i\g\p\z\u\5\6\g\1\7\g\m\o\f\k\r\b\o\i\d\7\o\1\i\o\f\i\h\9\h\0\8\v\z\9\p\n\4\0\7\k\7\a\6\8\g\m\5\i\z\d\l\v\i\7\i\i\d\4\k\8\y\w\2\l\6\4\3\8\j\j\q\m\0\s\1\a\m\f\v\z\9\u\l\h\i\p\v\k\0\2\8\e\y\o\v\b\o\w\5\4\q\r\u\f\j\z\k\2\b\i\6\4\t\4\z\l\2\j\s\n\3\p\h\3\s\4\e\2\c\o\k\m\r\m\x\h\3\r\p\n\y\7\4\w\4\4\n\o\m\u\n\p\g\e\q\0\1\t\j\r\s\0\5\w\v\6\u\n\o\b\v\u\7\y\o\3\u\f\3\x\a\n\q\l\p\4\7\p\d\a\6\m\z\z\1\c\g\s\8\3\6\o\d\4\n\r\n\w\c\i\a\t\4\7\i\w\3\4\e\2\i\y\i\e\p\y\b\g\9\7\k\b\2\7\c\r\g\6\9\n\z\6\w\l\p\e\g\q\x\l\v\i\w\o\a\7\i\t\i\q\8\o\r\o\h\9\4\x\f\s\s\0\9\o\j\e\1\n\2\r\a\r\3\z\f\5\1\3\i\j\f\1\4\h\t\h\8\x\7\p\h\a\2\l\1\n\e\g\g\l\y\w\t\8\s\i\n\f\0\m\y\4\b\i\4\4\r\y\6\m\p\p\k\z\m\x\l\u\n\6\i\1\7\a\8\0\j\i\1\8\a\s\r\m\7\l\0\q\q\z\7\y\2\x\g\k\7\8\a\l\f\f\d\i\v\w\y\4\h\8\e\y\5\7\o\z\i\t\7\u\2\w\a\7\s\r\2\l\w\9\3\t\z\a\j\8\t\o\v\w\m\e\c\d\a\g\s\y\w\g\r\5\m\t\n\u\7\q\m\r\o\l\l\g\y\c\z\d\1\7\s\h\e\m\g\m\e\f\y\f\e\k\w\g\z\m\p\u\p\o\8\k\i\v\z\u\c\3\n\2\0\s\3\c\9\o\g\x\l\b\d\l\l\v\7\9\o\c\f\t\e\b\m\3\m\e\4\z\8\h\j\k\p\n\h\q\6\t\9\m\g\q\j\k\d\l\g\s\4\5\m\x\e\4\h\w\p\y\f\w\h\h\v\l\g\7\e\6\8\1\5\1\u\r\m\d\t\2\l\1\2\x\o\9\j\e\w\4\n\y\8\v\g\l\9\8\z\3\p\t\8\f\a\s\a\5\d\g\j\6\l\i\2\n\k\1\n\5\8\5\p\h\i\z\a\x\v\9\g\c\n\e\q\g\u\0\q\z\t\f\q\4\9\e\n\e\7\s\7\m\p\b\i\6\p\q\0\z\1\2\a\1\2\j\v\j\c\0\a\w\a\a\m\c\6\8\p\u\l\a\7\w\t\8\p\0\z\v\0\c\5\r\3\v\k\d\5\s\z\y\o\n\t\x\t\s\q\m\s\m\u\e\f\0\0\j\7\k\2\3\i\j\f\h\9\k\i\b\n\m\d\1\v\l\m\5\2\2\f\0\u\1\r\4\r\7\m\u\l\1\d\4\f\q\8\u\w\w\s\d\0\l\q\a\9\z\0\s\e\2\7\8\4\i\p\3\n\1\f\2\p\2\7\r\9\y\5\h\a\c\m\i\v\x\w\u\k\c\w\j\2\k\3\6\z\2\5\4\d\h\j\i\l\f\r\e\f\y\j\u\5\d\4\i\7\3\k\w\q\q\v\9\t\3\v\b\i\l\z\z\h\h\b\e\9\k\b\2\9\m\g\i\o\j\l\w\r\q\7\g\h\u\m\j\q\4\h\w\k\0\s\8\f\m\3\h\5\b\o\9\y\5\l\w\m\f\j\c\1\0\n\e\f\o\3\z\h\f\n\b\2\w\e\h\8\2\l\g\0\h\s\4\c\5\p\4\z\m\5\4\g\b\g\5\t\p\x\l\8\k\x\w\7\t\7\9\j\n\5\6\i\s\6\a\1\5\t\v\r\3\6\y\4\c\m\0\z\b\4\5\w\j\5\c\d\q\l\w\y\t\0\t\j\2\l\1\a\3\d\l\v\6\c\c\1\w\1\r\m\c\d\4\6\2\r\z\h\j\2\n\p\m\w\9\d\7\c\i\w\b\e\q\4\w\m\i\q\7\l\6\x\7\7\z\k\z\4\a\c\g\6\i\y\i\k\h\v\y\o\e\y\u\7\e\r\z\s\k\9\h\s\t\a\t\a\4\v\v\4\1\l\d\a\6\6\4\7\7\n\y\5\8\i\k\j\c\e\b\3\f\f\b\v\7\v\f\j\8\w\5\4\t\a\q\3\n\9\k\k\c\d\u\x\n\t\w\c\0\w\e\p\l\n\n\q\u\b\f\x\s\k\4\b\c\p\m\r\n\w\a\l\9\j\i\x\4\0\z\5\v\g\0\s\c\h\1\f\a\5\7\0\t\p\p\a\4\a\a\v\x\j\v\c\8\o\e\v\t\f\h\l\4\b\6\k\2\8\e\u\8\x\4\i\z\3\9\a\i\4\s\u\w\r\w\w\l\k\t\3\v\4\t\k\c\f\1\o\g\i\9\5\j\u\7\d\s\2\v\b\7\z\t\q\3\5\x\m\m\p\3\e\b\s\l\8\o\o\r\1\q\m\z\x\x\o\r\a\b\5\s\e\z\x\c\b\9\f\s\y\c\6\
5\8\d\s\a\o\p\z\e\5\a\9\y\i\i\r\w\7\o\9\3\u\u\9\e\1\x\2\6\o\6\i\z\m\s\u\7\c\u\g\5\d\4\0\2\j\o\2\i\l\s\q\u\8\v\l\j\v\b\h\o\4\o\q\r\l\e\u\a\l\2\0\o\0\0\v\o\o\h\w\x\h\s\5\y\s\7\e\1\v\c\3\t\e\7\7\l\j\2\6\b\w\f\1\5\k\k\5\1\a\v\5\p\1\n\0\j\7\z\6\7\c\e\x\c\2\w\i\m\u\0\w\b\3\g\6\e\i\f\p\y\0\9\9\h\l\1\u\b\7\u\x\0\o\j\0\w\e\6\s\g\d\o\e\4\4\k\z\s\6\9\6\q\k\2\d\1\f\4\9\b\p\f\7\i\w\r\5\w\8\p\u\s\f\a\y\0\q\z\2\l\9\j\u\f\u\t\0\q\h\h\4\6\z\5\h\i\y\5\7\e\f\k\s\b\x\r\w\a\m\e\c\6\u\s\1\a\n\s\9\p\q\z\t\y\g\4\f\1\l\0\j\c\l\e\7\l\p\9\f\l\g\v\k\d\k\l\t\l\y\p\r\1\7\n\3\1\i\1\6\x\6\s\6\m\g\5\v\p\l\c\f\6\x\q\g\i\i\p\k\7\7\p\8\m\d\c\v\k\o\w\v\o\m\4\s\s\k\0\3\i\q\5\o\g\4\z\5\y\7\c\e\o\2\t\c\1\r\5\7\0\k\j\k\c\7\u\6\a\d\g\h\t\p\t\4\d\m\d\5\b\q\3\9\n\8\1\x\t\g\1\e\5\k\e\j\l\4\x\r\9\u\u\d\c\7\8\y\2\a\d\k\x\g\z\8\s\z\0\1\b\m\1\e\b\w\i\d\g\w\9\n\3\t\3\1\8\j\j\i\u\7\o\9\q\2\l\j\p\u\j\y\5\g\3\i\q\o\i\z\k\5\6\3\7\v\p\u\8\3\i\l\g\3\6\p\o\w\h\u\f\i\a\7\3\i\4\0\b\t\4\x\8\f\z\j\l\8\3\x\t\s\i\x\e\i\p\y\l\o\m\c\u\w\n\j\8\u\d\v\d\9\u\o\h\p\o\7\k\f\d\0\h\l\2\6\1\l\d\t\0\3\v\b\d\a\k\l\m\7\6\w\v\m\c\g\p\f\o\4\u\b\u\c\n\t\d\o\d\3\b\1\1\1\3\i\8\u\t\p\2\x\5\m\y\v\u\n\s\z\m\v\x\l\y\0\j\l\1\x\y\a\o\1\a\s\v\t\4\p\s\d\v\t\9\b\2\v\a\6\w\9\z\p\4\q\l\q\1\q\y\4\r\n\l\c\b\b\p\c\7\p\y\w\k\w\1\k\g\9\u\b\w\4\o\b\d\v\m\0\8\m\p\h\8\d\e\7\a\7\3\f\l\z\d\5\z\6\v\4\3\d\l\w\t\i\j\c\n\b\6\u\a\d\4\t\e\5\k\f\z\9\a\7\w\3\c\j\e\9\f\0\p\j\h\f\l\r\1\m\v\n\e\1\n\b\d\s\e\8\e\i\y\a\8\z\2\f\f\m\b\s\s\l\u\m\b\a\l\7\6\b\m\3\0\q\v\w\c\n\m\m\8\k\9\6\k\2\d\i\s\e\6\h\0\z\i\w\v\x\d\l\n\7\m\z\7\n\4\u\n\d\a\r\l\w\t\n\v\r\9\0\x\5\z\9\5\m\9\l\p\t\q\k\0\o\n\y\s\u\6\j\n\2\p\x\i\r\b\3\7\v\s\r\p\m\g\l\c\2\3\l\8\b\l\x\w\8\5\h\j\f\y\l\z\8\c\u\d\1\e\6\w\g\h\s\0\b\a\m\y\2\2\x\0\x\1\u\q\y\y\3\h\r\h\b\5\0\e\q\b\q\t\v\h\4\2\x\k\1\4\e\5\f\x\y\w\w\n\3\g\g\j\0\c\g\6\h\s\y\1\8\l\u\3\l\4\c\2\z\5\0\w\m\8\l\v\i\f\d\c\r\y\5\4\r\2\f\p\p\o\c\q\3\r\s\9\8\d\i\9\u\m\l\k\1\c\1\f\5\m\c\f\k\s\3\e\v\5\2\p\s\b\6\s\7\n\w\u\2\b\3\n\e\w\z\a\a\a\c\m\w\9\4\u\f\v\b\t\k\z\1\e\z\x\8\7\9\2\u\d\t\5\c\1\9\x\g\l\m\i\u\4\6\e\x\7\h\y\t\e\5\m\c\i\r\6\b\a\m\k\v\v\9\h\v\6\k\q\k\r\0\t\k\0\0\f\p\l\o\y\l\n\e\b\1\l\3\n\f\x\l\5\t\m\v\1\h\h\n\5\i\x\2\5\n\l\u\1\o\c\w\8\t\0\8\v\t\2\a\2\6\f\y\z\7\g\4\0\n\o\f\j\c\7\9\0\f\a\6\3\4\l\7\k\l\d\6\4\t\c\y\y\q\1\9\u\h\y\g\d\9\a\5\1\z\r\p\1\u\w\x\k\v\1\e\m\0\2\b\c\9\5\b\p\o\9\t\0\k\2\e\c\1\b\s\s\r\0\t\x\x\6\k\f\a\5\w\4\w\z\p\2\2\e\8\0\w\i\l\l\w\f\e\p\y\7\y\i\f\8\u\n\f\1\5\e\c\g\y\7\h\j\u\i\5\h\m\b\4\q\e\w\s\e\c\o\y\r\i\v\0\k\2\s\j\n\b\6\z\j\y\2\9\c\n\y\d\a\z\q\y\d\y\2\o\a\u\s\7\o\e\1\r\i\j\x\a\c\9\m\2\x\m\s\9\m\5\z\b\h\j\3\x\2\o\h\d\g\1\a\r\o\f\l\9\l\g\v\4\k\3\8\k\d\j\e\t\u\q\o\3\z\p\c\4\5\p\j\5\w\g\r\r\n\y\8\w\j\v\o\k\i\s\s\3\a\f\s\0\v\5\u\k\u\b\l\g\6\k\8\k\c\l\y\j\i\q\r\4\m\9\6\o\7\p\8\k\k\d\a\w\m\w\9\b\2\n\i\z\i\a\u\u\j\q\e\j\l\d\7\o\w\s\2\w\u\5\u\c\z\q\4\z\x\l\9\i\7\x\5\k\k\a\v\q\u\5\t\g\v\b\f\2\j\o\7\7\j\h\e\b\y\0\x\g\v\1\o\l\m\3\p\8\a\m\x\b\g\y\3\a\f\8\5\j\w\i\0\r\h\g\n\g\4\x\p\m\c\4\d\l\c\6\5\o\x\y\f\g\1\u\j\q\t\j\5\a\7\0\h\p\1\x\m\6\0\p\w\2\b\i\7\l\l\m\5\e\p\m\a\5\j\h\w\x\w\5\c\f\q\u\k\x\i\p\a\v\b\7\n\1\5\3\i\r\2\2\0\8\8\v\1\n\n\r\c\i\q\m\t\j\8\9\0\x\1\j\3\4\8\1\w\n\r\3\y\g\i\3\z\s\g\y\a\1\o\1\a\1\i\7\9\s\j\5\4\3\q\i\c\y\6\t\2\5\2\4\0\q\s\m\7\w\p\t\n\a\k\9\8\8\b\m\w\d\d\b\o\9\p\0\e\n\j\q\g\v\y\y\p\p\b\0\4\d\x\f\k\x\z\6\n\n\4\t\u\r\a\4\8\k\6\o\d\a\q\2\8\l\w\6\9\y\2\g\b\5\o\m\6\r\o\l\t\o\b\u\f\x\9\t\o\4\c\y\g\x\t\d\b\g\q\t\i\i\q\s\k\1\5\v\y\i\0\n\x\x\t\y\7\b\l\p\n\e\w\c\1\y\r\i\e\h\x\v\f\9\d\d\l\l\2\s\e\4\i\1\t\m\3\h\8\5\t\4\9\j\t\n\7\3\p\9\i\0\3\2\2\b\q\d\1\e\a\w\m\9\l\4\7\q\s\9\3\n\g\r
\k\f\n\s\i\l\i\j\7\t\b\r\i\5\n\j\x\f\1\w\h\8\m\m\r\s\2\8\k\x\e\m\9\v\p\q\l\f\5\z\a\b\d\q\n\o\6\x\p\0\6\v\u\u\m\h\6\s\x\z\o\6\8\f\o\b\r\o\o\q\8\n\7\k\h\m\7\5\6\v\w\u\k\8\l\5\8\u\h\b\f\k\3\v\7\9\z\j\9\k\4\y\t\n\8\n\f\8\q\x\z\b\u\j\j\t\m\1\0\r\2\c\1\i\j\o\9\d\t\1\h\l\c\9\7\y\l\2\x\0\t\7\d\a\w\f\z\d\d\j\7\t\g\t\2\c\q\g\7\8\k\7\0\v\2\9\r\x\1\4\q\0\n\p\u\u\l\e\d\t\7\z\k\6\w\1\9\m\6\k\8\5\a\l\k\y\r\c\k\j\u\5\w\e\s\p\6\8\c\9\x\z\w\p\z\j\x\5\w\8\9\e\j\a\y\y\7\d\u\2\8\4\b\4\3\h\5\l\k\g\w\2\k\y\6\l\8\8\c\g\x\l\p\3\g\u\i\i\f\8\w\k\s\y\6\s\b\a\g\w\1\b\0\2\x\z\l\i\9\c\x\l\d\7\e\x\f\g\r\z\3\r\m\p\h\p\x\3\c\t\t\e\h\n\b\y\e\1\b\9\k\z\q\a\t\d\j\z\3\5\w\a\4\n\y\r\n\4\e\k\u\6\f\k\0\s\j\r\c\2\h\w\4\d\m\b\p\9\j\1\w\i\f\2\7\j\4\h\q\h\r\u\6\2\p\q\x\b\r\o\z\v\4\d\n\5\3\g\n\h\9\p\z\f\z\6\y\8\0\f\6\m\9\v\l\e\q\s\k\x\6\r\y\j\s\l\u\5\8\f\2\0\n\z\b\3\w\5\v\s\q\a\k\c\w\5\p\p\h\e\p\1\a\9\r\8\r\h\q\i\s\c\e\v\1\3\x\i\l\d\o\r\k\f\y\i\y\z\y\f\m\9\u\t\h\h\e\x\j\a\w\f\f\x\d\u\1\f\i\9\t\h\g\0\8\u\c\q\r\g\s\r\d\z\f\s\0\y\f\0\s\n\2\a\3\7\r\p\x\6\m\e\q\1\4\k\r\c\4\x\n\i\g\b\2\r\g\k\o\4\p\y\q\n\f\s\y\4\x\h\7 ]] 00:14:54.939 00:14:54.939 real 0m1.386s 00:14:54.939 user 0m0.946s 00:14:54.939 sys 0m0.563s 00:14:54.939 ************************************ 00:14:54.939 END TEST dd_rw_offset 00:14:54.939 ************************************ 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:14:54.939 09:08:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:54.939 [2024-05-15 09:08:07.353057] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:54.939 [2024-05-15 09:08:07.353368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62238 ] 00:14:54.939 { 00:14:54.939 "subsystems": [ 00:14:54.939 { 00:14:54.939 "subsystem": "bdev", 00:14:54.939 "config": [ 00:14:54.939 { 00:14:54.939 "params": { 00:14:54.939 "trtype": "pcie", 00:14:54.939 "traddr": "0000:00:10.0", 00:14:54.939 "name": "Nvme0" 00:14:54.939 }, 00:14:54.939 "method": "bdev_nvme_attach_controller" 00:14:54.939 }, 00:14:54.939 { 00:14:54.939 "method": "bdev_wait_for_examine" 00:14:54.939 } 00:14:54.939 ] 00:14:54.939 } 00:14:54.939 ] 00:14:54.939 } 00:14:55.197 [2024-05-15 09:08:07.490494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.197 [2024-05-15 09:08:07.611165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.714  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:55.714 00:14:55.714 09:08:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:55.714 ************************************ 00:14:55.714 END TEST spdk_dd_basic_rw 00:14:55.714 ************************************ 00:14:55.714 00:14:55.714 real 0m18.757s 00:14:55.714 user 0m13.334s 00:14:55.714 sys 0m6.450s 00:14:55.714 09:08:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:55.714 09:08:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:14:55.714 09:08:08 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:14:55.714 09:08:08 spdk_dd -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:55.714 09:08:08 spdk_dd -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:55.714 09:08:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:14:55.714 ************************************ 00:14:55.714 START TEST spdk_dd_posix 00:14:55.714 ************************************ 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:14:55.714 * Looking for test storage... 
00:14:55.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:14:55.714 * First test run, liburing in use 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:55.714 ************************************ 00:14:55.714 START TEST dd_flag_append 00:14:55.714 ************************************ 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # append 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=wsd849tdb6161mp6pdmsm8vtmx9ge9y0 00:14:55.714 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:14:55.972 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:14:55.972 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:14:55.972 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=d2t6p3pp4znbxbkmw3f51t0z82ff7em1 00:14:55.972 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s wsd849tdb6161mp6pdmsm8vtmx9ge9y0 00:14:55.972 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s d2t6p3pp4znbxbkmw3f51t0z82ff7em1 00:14:55.972 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:14:55.972 [2024-05-15 09:08:08.212784] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:55.972 [2024-05-15 09:08:08.213191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62302 ] 00:14:55.972 [2024-05-15 09:08:08.355133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.231 [2024-05-15 09:08:08.462490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.489  Copying: 32/32 [B] (average 31 kBps) 00:14:56.489 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ d2t6p3pp4znbxbkmw3f51t0z82ff7em1wsd849tdb6161mp6pdmsm8vtmx9ge9y0 == \d\2\t\6\p\3\p\p\4\z\n\b\x\b\k\m\w\3\f\5\1\t\0\z\8\2\f\f\7\e\m\1\w\s\d\8\4\9\t\d\b\6\1\6\1\m\p\6\p\d\m\s\m\8\v\t\m\x\9\g\e\9\y\0 ]] 00:14:56.489 00:14:56.489 real 0m0.608s 00:14:56.489 user 0m0.364s 00:14:56.489 sys 0m0.245s 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:14:56.489 ************************************ 00:14:56.489 END TEST dd_flag_append 00:14:56.489 ************************************ 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:56.489 ************************************ 00:14:56.489 START TEST dd_flag_directory 00:14:56.489 ************************************ 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # directory 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # local es=0 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:56.489 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:56.490 09:08:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:56.490 [2024-05-15 09:08:08.872656] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:56.490 [2024-05-15 09:08:08.873751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62330 ] 00:14:56.748 [2024-05-15 09:08:09.019738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.748 [2024-05-15 09:08:09.139513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.005 [2024-05-15 09:08:09.223455] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:14:57.005 [2024-05-15 09:08:09.223825] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:14:57.005 [2024-05-15 09:08:09.223934] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:57.005 [2024-05-15 09:08:09.326309] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:57.005 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # es=236 00:14:57.005 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:57.005 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # es=108 00:14:57.005 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # case "$es" in 00:14:57.005 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@669 -- # es=1 00:14:57.005 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:57.005 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:14:57.005 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # local es=0 00:14:57.006 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:14:57.006 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.006 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:57.006 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.263 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:57.263 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.263 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:57.264 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.264 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:57.264 09:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:14:57.264 [2024-05-15 09:08:09.503562] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:57.264 [2024-05-15 09:08:09.503934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62340 ] 00:14:57.264 [2024-05-15 09:08:09.646737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.521 [2024-05-15 09:08:09.758360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.522 [2024-05-15 09:08:09.834607] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:14:57.522 [2024-05-15 09:08:09.834986] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:14:57.522 [2024-05-15 09:08:09.835087] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:57.522 [2024-05-15 09:08:09.935419] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # es=236 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # es=108 00:14:57.780 ************************************ 00:14:57.780 END TEST dd_flag_directory 00:14:57.780 ************************************ 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # case "$es" in 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@669 -- # es=1 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:57.780 00:14:57.780 real 0m1.252s 00:14:57.780 user 0m0.749s 00:14:57.780 sys 0m0.281s 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:57.780 ************************************ 00:14:57.780 START TEST dd_flag_nofollow 00:14:57.780 
************************************ 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # nofollow 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # local es=0 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:57.780 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:57.780 [2024-05-15 09:08:10.182445] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:57.780 [2024-05-15 09:08:10.182809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62374 ] 00:14:58.037 [2024-05-15 09:08:10.328823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.037 [2024-05-15 09:08:10.437677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.328 [2024-05-15 09:08:10.512372] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:14:58.328 [2024-05-15 09:08:10.512706] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:14:58.328 [2024-05-15 09:08:10.512820] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:58.328 [2024-05-15 09:08:10.611882] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # es=216 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # es=88 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # case "$es" in 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@669 -- # es=1 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # local es=0 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:58.328 09:08:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:58.328 09:08:10 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:14:58.586 [2024-05-15 09:08:10.784811] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:14:58.586 [2024-05-15 09:08:10.785656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62383 ] 00:14:58.586 [2024-05-15 09:08:10.923730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.586 [2024-05-15 09:08:11.033074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.843 [2024-05-15 09:08:11.107676] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:14:58.843 [2024-05-15 09:08:11.107978] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:14:58.843 [2024-05-15 09:08:11.108092] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:58.843 [2024-05-15 09:08:11.208178] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # es=216 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # es=88 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # case "$es" in 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@669 -- # es=1 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:14:59.100 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:59.100 [2024-05-15 09:08:11.376196] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:14:59.100 [2024-05-15 09:08:11.376517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62391 ] 00:14:59.100 [2024-05-15 09:08:11.514459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.358 [2024-05-15 09:08:11.620297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.616  Copying: 512/512 [B] (average 500 kBps) 00:14:59.616 00:14:59.616 ************************************ 00:14:59.616 END TEST dd_flag_nofollow 00:14:59.616 ************************************ 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ rlpcl5cj6qmk1kl1vstqo3dd49uudwz9vefm78jnquzg06vfww0dml41yq41fomixrcz81slf30065rmyhbsk3st1po02bql5td2h7m8nkpf0k9kukpo73qzzxngp8bot4qe1a9dabj3bloyizk56gely6ijwk0s3ld83oaay50kviqz02i93hv40jy8m5l4frp3ngrb27a3vofxpspxd4c1awwea2ooeypf1ehajt37nzti9cg7x1ixsxiq22sts9efcsqu1j3cb3mji38dw9om0qhzeaph9e6ll063rjj2e6hb2yaepcbdn49714zsai229g95xvo42081w36t3wnyez8jrqldjzh99llss7tig6kig4xcvecjylp4xp0do25dkg2zeuf53gf49byzn4tfu8iq3kum8mstvrpkbt8g1nj50q0xe5m8wxampv6obd7hjxk91dpxevmkfbyls9qfd1qirf05b78s39s0few4bpglm64y4nc7brhm6aef == \r\l\p\c\l\5\c\j\6\q\m\k\1\k\l\1\v\s\t\q\o\3\d\d\4\9\u\u\d\w\z\9\v\e\f\m\7\8\j\n\q\u\z\g\0\6\v\f\w\w\0\d\m\l\4\1\y\q\4\1\f\o\m\i\x\r\c\z\8\1\s\l\f\3\0\0\6\5\r\m\y\h\b\s\k\3\s\t\1\p\o\0\2\b\q\l\5\t\d\2\h\7\m\8\n\k\p\f\0\k\9\k\u\k\p\o\7\3\q\z\z\x\n\g\p\8\b\o\t\4\q\e\1\a\9\d\a\b\j\3\b\l\o\y\i\z\k\5\6\g\e\l\y\6\i\j\w\k\0\s\3\l\d\8\3\o\a\a\y\5\0\k\v\i\q\z\0\2\i\9\3\h\v\4\0\j\y\8\m\5\l\4\f\r\p\3\n\g\r\b\2\7\a\3\v\o\f\x\p\s\p\x\d\4\c\1\a\w\w\e\a\2\o\o\e\y\p\f\1\e\h\a\j\t\3\7\n\z\t\i\9\c\g\7\x\1\i\x\s\x\i\q\2\2\s\t\s\9\e\f\c\s\q\u\1\j\3\c\b\3\m\j\i\3\8\d\w\9\o\m\0\q\h\z\e\a\p\h\9\e\6\l\l\0\6\3\r\j\j\2\e\6\h\b\2\y\a\e\p\c\b\d\n\4\9\7\1\4\z\s\a\i\2\2\9\g\9\5\x\v\o\4\2\0\8\1\w\3\6\t\3\w\n\y\e\z\8\j\r\q\l\d\j\z\h\9\9\l\l\s\s\7\t\i\g\6\k\i\g\4\x\c\v\e\c\j\y\l\p\4\x\p\0\d\o\2\5\d\k\g\2\z\e\u\f\5\3\g\f\4\9\b\y\z\n\4\t\f\u\8\i\q\3\k\u\m\8\m\s\t\v\r\p\k\b\t\8\g\1\n\j\5\0\q\0\x\e\5\m\8\w\x\a\m\p\v\6\o\b\d\7\h\j\x\k\9\1\d\p\x\e\v\m\k\f\b\y\l\s\9\q\f\d\1\q\i\r\f\0\5\b\7\8\s\3\9\s\0\f\e\w\4\b\p\g\l\m\6\4\y\4\n\c\7\b\r\h\m\6\a\e\f ]] 00:14:59.616 00:14:59.616 real 0m1.796s 00:14:59.616 user 0m1.075s 00:14:59.616 sys 0m0.505s 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:59.616 ************************************ 00:14:59.616 START TEST dd_flag_noatime 00:14:59.616 ************************************ 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # noatime 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 
-- # gen_bytes 512 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1715764091 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1715764091 00:14:59.616 09:08:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:15:00.987 09:08:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:00.987 [2024-05-15 09:08:13.043076] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:00.987 [2024-05-15 09:08:13.043308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62433 ] 00:15:00.987 [2024-05-15 09:08:13.183822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.987 [2024-05-15 09:08:13.299906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.246  Copying: 512/512 [B] (average 500 kBps) 00:15:01.246 00:15:01.246 09:08:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:01.246 09:08:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1715764091 )) 00:15:01.246 09:08:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:01.246 09:08:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1715764091 )) 00:15:01.246 09:08:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:01.246 [2024-05-15 09:08:13.672462] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:01.246 [2024-05-15 09:08:13.672809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62447 ] 00:15:01.505 [2024-05-15 09:08:13.818819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.505 [2024-05-15 09:08:13.936540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.081  Copying: 512/512 [B] (average 500 kBps) 00:15:02.081 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:02.081 ************************************ 00:15:02.081 END TEST dd_flag_noatime 00:15:02.081 ************************************ 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1715764094 )) 00:15:02.081 00:15:02.081 real 0m2.267s 00:15:02.081 user 0m0.757s 00:15:02.081 sys 0m0.510s 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 ************************************ 00:15:02.081 START TEST dd_flags_misc 00:15:02.081 ************************************ 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # io 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:02.081 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:02.081 [2024-05-15 09:08:14.353128] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:02.081 [2024-05-15 09:08:14.353510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62481 ] 00:15:02.081 [2024-05-15 09:08:14.494767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.339 [2024-05-15 09:08:14.601724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.599  Copying: 512/512 [B] (average 500 kBps) 00:15:02.599 00:15:02.599 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lj4za6rw3a96ilgw7ytdpvryf8mh55rxika1qkmz3723fj89l155u59ysmk5im3l0vd9bcdymgas330pueyp5n95tkqidiz8w8mlfmvp1kd2y4wepdkdlljpsbf6vh1ny5il9ljchekbs5phfghxpqyuoqj1yjj2wksuis8a9csnmxwijb07mlrubu3bein8sqih7ytxdsq0f31cfu23x8ph18ir81hd3eaien6lg46ijuf8m2ymabcuoe5ypdcdstk3izxjjzyrbqj2p4ne27yc11r2d4h76bzkvz2ui62pq0jdja7b26wesp5cx2hcnjkx75bdqebsm5t0pd50sxlh92tvet1irqygwmj22dr349gz9ytth4xljd8boom7kr9lifiy0s341qjfi5b0nrakqjmw5kn0b3bjty91dv5ts1my8vi6f9so1113m0shwx96zx31ye60zgm2lk19nmberhek1tukktbj0gvnrdcaycx0jz6g3bytqfsfg5ea == \l\j\4\z\a\6\r\w\3\a\9\6\i\l\g\w\7\y\t\d\p\v\r\y\f\8\m\h\5\5\r\x\i\k\a\1\q\k\m\z\3\7\2\3\f\j\8\9\l\1\5\5\u\5\9\y\s\m\k\5\i\m\3\l\0\v\d\9\b\c\d\y\m\g\a\s\3\3\0\p\u\e\y\p\5\n\9\5\t\k\q\i\d\i\z\8\w\8\m\l\f\m\v\p\1\k\d\2\y\4\w\e\p\d\k\d\l\l\j\p\s\b\f\6\v\h\1\n\y\5\i\l\9\l\j\c\h\e\k\b\s\5\p\h\f\g\h\x\p\q\y\u\o\q\j\1\y\j\j\2\w\k\s\u\i\s\8\a\9\c\s\n\m\x\w\i\j\b\0\7\m\l\r\u\b\u\3\b\e\i\n\8\s\q\i\h\7\y\t\x\d\s\q\0\f\3\1\c\f\u\2\3\x\8\p\h\1\8\i\r\8\1\h\d\3\e\a\i\e\n\6\l\g\4\6\i\j\u\f\8\m\2\y\m\a\b\c\u\o\e\5\y\p\d\c\d\s\t\k\3\i\z\x\j\j\z\y\r\b\q\j\2\p\4\n\e\2\7\y\c\1\1\r\2\d\4\h\7\6\b\z\k\v\z\2\u\i\6\2\p\q\0\j\d\j\a\7\b\2\6\w\e\s\p\5\c\x\2\h\c\n\j\k\x\7\5\b\d\q\e\b\s\m\5\t\0\p\d\5\0\s\x\l\h\9\2\t\v\e\t\1\i\r\q\y\g\w\m\j\2\2\d\r\3\4\9\g\z\9\y\t\t\h\4\x\l\j\d\8\b\o\o\m\7\k\r\9\l\i\f\i\y\0\s\3\4\1\q\j\f\i\5\b\0\n\r\a\k\q\j\m\w\5\k\n\0\b\3\b\j\t\y\9\1\d\v\5\t\s\1\m\y\8\v\i\6\f\9\s\o\1\1\1\3\m\0\s\h\w\x\9\6\z\x\3\1\y\e\6\0\z\g\m\2\l\k\1\9\n\m\b\e\r\h\e\k\1\t\u\k\k\t\b\j\0\g\v\n\r\d\c\a\y\c\x\0\j\z\6\g\3\b\y\t\q\f\s\f\g\5\e\a ]] 00:15:02.599 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:02.599 09:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:02.599 [2024-05-15 09:08:14.943031] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:02.599 [2024-05-15 09:08:14.943842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62489 ] 00:15:02.857 [2024-05-15 09:08:15.087222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.857 [2024-05-15 09:08:15.211645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.115  Copying: 512/512 [B] (average 500 kBps) 00:15:03.115 00:15:03.115 09:08:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lj4za6rw3a96ilgw7ytdpvryf8mh55rxika1qkmz3723fj89l155u59ysmk5im3l0vd9bcdymgas330pueyp5n95tkqidiz8w8mlfmvp1kd2y4wepdkdlljpsbf6vh1ny5il9ljchekbs5phfghxpqyuoqj1yjj2wksuis8a9csnmxwijb07mlrubu3bein8sqih7ytxdsq0f31cfu23x8ph18ir81hd3eaien6lg46ijuf8m2ymabcuoe5ypdcdstk3izxjjzyrbqj2p4ne27yc11r2d4h76bzkvz2ui62pq0jdja7b26wesp5cx2hcnjkx75bdqebsm5t0pd50sxlh92tvet1irqygwmj22dr349gz9ytth4xljd8boom7kr9lifiy0s341qjfi5b0nrakqjmw5kn0b3bjty91dv5ts1my8vi6f9so1113m0shwx96zx31ye60zgm2lk19nmberhek1tukktbj0gvnrdcaycx0jz6g3bytqfsfg5ea == \l\j\4\z\a\6\r\w\3\a\9\6\i\l\g\w\7\y\t\d\p\v\r\y\f\8\m\h\5\5\r\x\i\k\a\1\q\k\m\z\3\7\2\3\f\j\8\9\l\1\5\5\u\5\9\y\s\m\k\5\i\m\3\l\0\v\d\9\b\c\d\y\m\g\a\s\3\3\0\p\u\e\y\p\5\n\9\5\t\k\q\i\d\i\z\8\w\8\m\l\f\m\v\p\1\k\d\2\y\4\w\e\p\d\k\d\l\l\j\p\s\b\f\6\v\h\1\n\y\5\i\l\9\l\j\c\h\e\k\b\s\5\p\h\f\g\h\x\p\q\y\u\o\q\j\1\y\j\j\2\w\k\s\u\i\s\8\a\9\c\s\n\m\x\w\i\j\b\0\7\m\l\r\u\b\u\3\b\e\i\n\8\s\q\i\h\7\y\t\x\d\s\q\0\f\3\1\c\f\u\2\3\x\8\p\h\1\8\i\r\8\1\h\d\3\e\a\i\e\n\6\l\g\4\6\i\j\u\f\8\m\2\y\m\a\b\c\u\o\e\5\y\p\d\c\d\s\t\k\3\i\z\x\j\j\z\y\r\b\q\j\2\p\4\n\e\2\7\y\c\1\1\r\2\d\4\h\7\6\b\z\k\v\z\2\u\i\6\2\p\q\0\j\d\j\a\7\b\2\6\w\e\s\p\5\c\x\2\h\c\n\j\k\x\7\5\b\d\q\e\b\s\m\5\t\0\p\d\5\0\s\x\l\h\9\2\t\v\e\t\1\i\r\q\y\g\w\m\j\2\2\d\r\3\4\9\g\z\9\y\t\t\h\4\x\l\j\d\8\b\o\o\m\7\k\r\9\l\i\f\i\y\0\s\3\4\1\q\j\f\i\5\b\0\n\r\a\k\q\j\m\w\5\k\n\0\b\3\b\j\t\y\9\1\d\v\5\t\s\1\m\y\8\v\i\6\f\9\s\o\1\1\1\3\m\0\s\h\w\x\9\6\z\x\3\1\y\e\6\0\z\g\m\2\l\k\1\9\n\m\b\e\r\h\e\k\1\t\u\k\k\t\b\j\0\g\v\n\r\d\c\a\y\c\x\0\j\z\6\g\3\b\y\t\q\f\s\f\g\5\e\a ]] 00:15:03.115 09:08:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:03.115 09:08:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:03.115 [2024-05-15 09:08:15.560118] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:03.115 [2024-05-15 09:08:15.560521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62500 ] 00:15:03.373 [2024-05-15 09:08:15.708755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.631 [2024-05-15 09:08:15.844867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.890  Copying: 512/512 [B] (average 125 kBps) 00:15:03.890 00:15:03.890 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lj4za6rw3a96ilgw7ytdpvryf8mh55rxika1qkmz3723fj89l155u59ysmk5im3l0vd9bcdymgas330pueyp5n95tkqidiz8w8mlfmvp1kd2y4wepdkdlljpsbf6vh1ny5il9ljchekbs5phfghxpqyuoqj1yjj2wksuis8a9csnmxwijb07mlrubu3bein8sqih7ytxdsq0f31cfu23x8ph18ir81hd3eaien6lg46ijuf8m2ymabcuoe5ypdcdstk3izxjjzyrbqj2p4ne27yc11r2d4h76bzkvz2ui62pq0jdja7b26wesp5cx2hcnjkx75bdqebsm5t0pd50sxlh92tvet1irqygwmj22dr349gz9ytth4xljd8boom7kr9lifiy0s341qjfi5b0nrakqjmw5kn0b3bjty91dv5ts1my8vi6f9so1113m0shwx96zx31ye60zgm2lk19nmberhek1tukktbj0gvnrdcaycx0jz6g3bytqfsfg5ea == \l\j\4\z\a\6\r\w\3\a\9\6\i\l\g\w\7\y\t\d\p\v\r\y\f\8\m\h\5\5\r\x\i\k\a\1\q\k\m\z\3\7\2\3\f\j\8\9\l\1\5\5\u\5\9\y\s\m\k\5\i\m\3\l\0\v\d\9\b\c\d\y\m\g\a\s\3\3\0\p\u\e\y\p\5\n\9\5\t\k\q\i\d\i\z\8\w\8\m\l\f\m\v\p\1\k\d\2\y\4\w\e\p\d\k\d\l\l\j\p\s\b\f\6\v\h\1\n\y\5\i\l\9\l\j\c\h\e\k\b\s\5\p\h\f\g\h\x\p\q\y\u\o\q\j\1\y\j\j\2\w\k\s\u\i\s\8\a\9\c\s\n\m\x\w\i\j\b\0\7\m\l\r\u\b\u\3\b\e\i\n\8\s\q\i\h\7\y\t\x\d\s\q\0\f\3\1\c\f\u\2\3\x\8\p\h\1\8\i\r\8\1\h\d\3\e\a\i\e\n\6\l\g\4\6\i\j\u\f\8\m\2\y\m\a\b\c\u\o\e\5\y\p\d\c\d\s\t\k\3\i\z\x\j\j\z\y\r\b\q\j\2\p\4\n\e\2\7\y\c\1\1\r\2\d\4\h\7\6\b\z\k\v\z\2\u\i\6\2\p\q\0\j\d\j\a\7\b\2\6\w\e\s\p\5\c\x\2\h\c\n\j\k\x\7\5\b\d\q\e\b\s\m\5\t\0\p\d\5\0\s\x\l\h\9\2\t\v\e\t\1\i\r\q\y\g\w\m\j\2\2\d\r\3\4\9\g\z\9\y\t\t\h\4\x\l\j\d\8\b\o\o\m\7\k\r\9\l\i\f\i\y\0\s\3\4\1\q\j\f\i\5\b\0\n\r\a\k\q\j\m\w\5\k\n\0\b\3\b\j\t\y\9\1\d\v\5\t\s\1\m\y\8\v\i\6\f\9\s\o\1\1\1\3\m\0\s\h\w\x\9\6\z\x\3\1\y\e\6\0\z\g\m\2\l\k\1\9\n\m\b\e\r\h\e\k\1\t\u\k\k\t\b\j\0\g\v\n\r\d\c\a\y\c\x\0\j\z\6\g\3\b\y\t\q\f\s\f\g\5\e\a ]] 00:15:03.890 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:03.890 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:03.890 [2024-05-15 09:08:16.202625] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:03.890 [2024-05-15 09:08:16.202947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62515 ] 00:15:04.148 [2024-05-15 09:08:16.345373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.148 [2024-05-15 09:08:16.446702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.464  Copying: 512/512 [B] (average 166 kBps) 00:15:04.464 00:15:04.464 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lj4za6rw3a96ilgw7ytdpvryf8mh55rxika1qkmz3723fj89l155u59ysmk5im3l0vd9bcdymgas330pueyp5n95tkqidiz8w8mlfmvp1kd2y4wepdkdlljpsbf6vh1ny5il9ljchekbs5phfghxpqyuoqj1yjj2wksuis8a9csnmxwijb07mlrubu3bein8sqih7ytxdsq0f31cfu23x8ph18ir81hd3eaien6lg46ijuf8m2ymabcuoe5ypdcdstk3izxjjzyrbqj2p4ne27yc11r2d4h76bzkvz2ui62pq0jdja7b26wesp5cx2hcnjkx75bdqebsm5t0pd50sxlh92tvet1irqygwmj22dr349gz9ytth4xljd8boom7kr9lifiy0s341qjfi5b0nrakqjmw5kn0b3bjty91dv5ts1my8vi6f9so1113m0shwx96zx31ye60zgm2lk19nmberhek1tukktbj0gvnrdcaycx0jz6g3bytqfsfg5ea == \l\j\4\z\a\6\r\w\3\a\9\6\i\l\g\w\7\y\t\d\p\v\r\y\f\8\m\h\5\5\r\x\i\k\a\1\q\k\m\z\3\7\2\3\f\j\8\9\l\1\5\5\u\5\9\y\s\m\k\5\i\m\3\l\0\v\d\9\b\c\d\y\m\g\a\s\3\3\0\p\u\e\y\p\5\n\9\5\t\k\q\i\d\i\z\8\w\8\m\l\f\m\v\p\1\k\d\2\y\4\w\e\p\d\k\d\l\l\j\p\s\b\f\6\v\h\1\n\y\5\i\l\9\l\j\c\h\e\k\b\s\5\p\h\f\g\h\x\p\q\y\u\o\q\j\1\y\j\j\2\w\k\s\u\i\s\8\a\9\c\s\n\m\x\w\i\j\b\0\7\m\l\r\u\b\u\3\b\e\i\n\8\s\q\i\h\7\y\t\x\d\s\q\0\f\3\1\c\f\u\2\3\x\8\p\h\1\8\i\r\8\1\h\d\3\e\a\i\e\n\6\l\g\4\6\i\j\u\f\8\m\2\y\m\a\b\c\u\o\e\5\y\p\d\c\d\s\t\k\3\i\z\x\j\j\z\y\r\b\q\j\2\p\4\n\e\2\7\y\c\1\1\r\2\d\4\h\7\6\b\z\k\v\z\2\u\i\6\2\p\q\0\j\d\j\a\7\b\2\6\w\e\s\p\5\c\x\2\h\c\n\j\k\x\7\5\b\d\q\e\b\s\m\5\t\0\p\d\5\0\s\x\l\h\9\2\t\v\e\t\1\i\r\q\y\g\w\m\j\2\2\d\r\3\4\9\g\z\9\y\t\t\h\4\x\l\j\d\8\b\o\o\m\7\k\r\9\l\i\f\i\y\0\s\3\4\1\q\j\f\i\5\b\0\n\r\a\k\q\j\m\w\5\k\n\0\b\3\b\j\t\y\9\1\d\v\5\t\s\1\m\y\8\v\i\6\f\9\s\o\1\1\1\3\m\0\s\h\w\x\9\6\z\x\3\1\y\e\6\0\z\g\m\2\l\k\1\9\n\m\b\e\r\h\e\k\1\t\u\k\k\t\b\j\0\g\v\n\r\d\c\a\y\c\x\0\j\z\6\g\3\b\y\t\q\f\s\f\g\5\e\a ]] 00:15:04.464 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:04.464 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:15:04.464 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:15:04.464 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:04.465 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:04.465 09:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:04.465 [2024-05-15 09:08:16.795273] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:04.465 [2024-05-15 09:08:16.795625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62519 ] 00:15:04.724 [2024-05-15 09:08:16.938515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.724 [2024-05-15 09:08:17.043949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.982  Copying: 512/512 [B] (average 500 kBps) 00:15:04.983 00:15:04.983 09:08:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kuzna3636fjhzsn8rgfmk5aoarrgkowtyhfljbmbxj8mwudzvn3p19g0jarqw9l8m8oe3tx7a4v0t2naptfmb716ol0yxergmsqlhpi9bsiopltjjjojxjx87bfhh1t8xqfdvdpubhz1om1u2p7nmcsmjivsymqjhe8pqdg56rg5nnavy4mefy4v6h61ibhpoafe4h7gffxa86opde5ahr89d6v5jqa54xauvo243c3y0f26cu5jwjfwcja0w95s5q2bm6ysdd4utpa5iz4wv233itduja071o14u0nfe4mpsqxhimuyzchl8llnnus40d8qeks0debywxbngdp2tmarqjbjp2ddj2uwjofsk8fq8s18z7akc4csost4p7i2dsrbrrr89i7h41hz9rtklf7arwsusyf0sn13wkdpjxxujz9jh0fzlzy3lfrxy23eaguw8bnypoqitgd559171zgp9kxqcu05ff0szp2xqvsx5j4ni7kgy8977oezjg8s == \k\u\z\n\a\3\6\3\6\f\j\h\z\s\n\8\r\g\f\m\k\5\a\o\a\r\r\g\k\o\w\t\y\h\f\l\j\b\m\b\x\j\8\m\w\u\d\z\v\n\3\p\1\9\g\0\j\a\r\q\w\9\l\8\m\8\o\e\3\t\x\7\a\4\v\0\t\2\n\a\p\t\f\m\b\7\1\6\o\l\0\y\x\e\r\g\m\s\q\l\h\p\i\9\b\s\i\o\p\l\t\j\j\j\o\j\x\j\x\8\7\b\f\h\h\1\t\8\x\q\f\d\v\d\p\u\b\h\z\1\o\m\1\u\2\p\7\n\m\c\s\m\j\i\v\s\y\m\q\j\h\e\8\p\q\d\g\5\6\r\g\5\n\n\a\v\y\4\m\e\f\y\4\v\6\h\6\1\i\b\h\p\o\a\f\e\4\h\7\g\f\f\x\a\8\6\o\p\d\e\5\a\h\r\8\9\d\6\v\5\j\q\a\5\4\x\a\u\v\o\2\4\3\c\3\y\0\f\2\6\c\u\5\j\w\j\f\w\c\j\a\0\w\9\5\s\5\q\2\b\m\6\y\s\d\d\4\u\t\p\a\5\i\z\4\w\v\2\3\3\i\t\d\u\j\a\0\7\1\o\1\4\u\0\n\f\e\4\m\p\s\q\x\h\i\m\u\y\z\c\h\l\8\l\l\n\n\u\s\4\0\d\8\q\e\k\s\0\d\e\b\y\w\x\b\n\g\d\p\2\t\m\a\r\q\j\b\j\p\2\d\d\j\2\u\w\j\o\f\s\k\8\f\q\8\s\1\8\z\7\a\k\c\4\c\s\o\s\t\4\p\7\i\2\d\s\r\b\r\r\r\8\9\i\7\h\4\1\h\z\9\r\t\k\l\f\7\a\r\w\s\u\s\y\f\0\s\n\1\3\w\k\d\p\j\x\x\u\j\z\9\j\h\0\f\z\l\z\y\3\l\f\r\x\y\2\3\e\a\g\u\w\8\b\n\y\p\o\q\i\t\g\d\5\5\9\1\7\1\z\g\p\9\k\x\q\c\u\0\5\f\f\0\s\z\p\2\x\q\v\s\x\5\j\4\n\i\7\k\g\y\8\9\7\7\o\e\z\j\g\8\s ]] 00:15:04.983 09:08:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:04.983 09:08:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:04.983 [2024-05-15 09:08:17.388123] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:04.983 [2024-05-15 09:08:17.388461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62534 ] 00:15:05.242 [2024-05-15 09:08:17.534587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.242 [2024-05-15 09:08:17.646830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.501  Copying: 512/512 [B] (average 500 kBps) 00:15:05.501 00:15:05.760 09:08:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kuzna3636fjhzsn8rgfmk5aoarrgkowtyhfljbmbxj8mwudzvn3p19g0jarqw9l8m8oe3tx7a4v0t2naptfmb716ol0yxergmsqlhpi9bsiopltjjjojxjx87bfhh1t8xqfdvdpubhz1om1u2p7nmcsmjivsymqjhe8pqdg56rg5nnavy4mefy4v6h61ibhpoafe4h7gffxa86opde5ahr89d6v5jqa54xauvo243c3y0f26cu5jwjfwcja0w95s5q2bm6ysdd4utpa5iz4wv233itduja071o14u0nfe4mpsqxhimuyzchl8llnnus40d8qeks0debywxbngdp2tmarqjbjp2ddj2uwjofsk8fq8s18z7akc4csost4p7i2dsrbrrr89i7h41hz9rtklf7arwsusyf0sn13wkdpjxxujz9jh0fzlzy3lfrxy23eaguw8bnypoqitgd559171zgp9kxqcu05ff0szp2xqvsx5j4ni7kgy8977oezjg8s == \k\u\z\n\a\3\6\3\6\f\j\h\z\s\n\8\r\g\f\m\k\5\a\o\a\r\r\g\k\o\w\t\y\h\f\l\j\b\m\b\x\j\8\m\w\u\d\z\v\n\3\p\1\9\g\0\j\a\r\q\w\9\l\8\m\8\o\e\3\t\x\7\a\4\v\0\t\2\n\a\p\t\f\m\b\7\1\6\o\l\0\y\x\e\r\g\m\s\q\l\h\p\i\9\b\s\i\o\p\l\t\j\j\j\o\j\x\j\x\8\7\b\f\h\h\1\t\8\x\q\f\d\v\d\p\u\b\h\z\1\o\m\1\u\2\p\7\n\m\c\s\m\j\i\v\s\y\m\q\j\h\e\8\p\q\d\g\5\6\r\g\5\n\n\a\v\y\4\m\e\f\y\4\v\6\h\6\1\i\b\h\p\o\a\f\e\4\h\7\g\f\f\x\a\8\6\o\p\d\e\5\a\h\r\8\9\d\6\v\5\j\q\a\5\4\x\a\u\v\o\2\4\3\c\3\y\0\f\2\6\c\u\5\j\w\j\f\w\c\j\a\0\w\9\5\s\5\q\2\b\m\6\y\s\d\d\4\u\t\p\a\5\i\z\4\w\v\2\3\3\i\t\d\u\j\a\0\7\1\o\1\4\u\0\n\f\e\4\m\p\s\q\x\h\i\m\u\y\z\c\h\l\8\l\l\n\n\u\s\4\0\d\8\q\e\k\s\0\d\e\b\y\w\x\b\n\g\d\p\2\t\m\a\r\q\j\b\j\p\2\d\d\j\2\u\w\j\o\f\s\k\8\f\q\8\s\1\8\z\7\a\k\c\4\c\s\o\s\t\4\p\7\i\2\d\s\r\b\r\r\r\8\9\i\7\h\4\1\h\z\9\r\t\k\l\f\7\a\r\w\s\u\s\y\f\0\s\n\1\3\w\k\d\p\j\x\x\u\j\z\9\j\h\0\f\z\l\z\y\3\l\f\r\x\y\2\3\e\a\g\u\w\8\b\n\y\p\o\q\i\t\g\d\5\5\9\1\7\1\z\g\p\9\k\x\q\c\u\0\5\f\f\0\s\z\p\2\x\q\v\s\x\5\j\4\n\i\7\k\g\y\8\9\7\7\o\e\z\j\g\8\s ]] 00:15:05.760 09:08:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:05.760 09:08:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:05.760 [2024-05-15 09:08:18.003250] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:05.760 [2024-05-15 09:08:18.003614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62538 ] 00:15:05.760 [2024-05-15 09:08:18.149005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.019 [2024-05-15 09:08:18.250175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.277  Copying: 512/512 [B] (average 250 kBps) 00:15:06.277 00:15:06.277 09:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kuzna3636fjhzsn8rgfmk5aoarrgkowtyhfljbmbxj8mwudzvn3p19g0jarqw9l8m8oe3tx7a4v0t2naptfmb716ol0yxergmsqlhpi9bsiopltjjjojxjx87bfhh1t8xqfdvdpubhz1om1u2p7nmcsmjivsymqjhe8pqdg56rg5nnavy4mefy4v6h61ibhpoafe4h7gffxa86opde5ahr89d6v5jqa54xauvo243c3y0f26cu5jwjfwcja0w95s5q2bm6ysdd4utpa5iz4wv233itduja071o14u0nfe4mpsqxhimuyzchl8llnnus40d8qeks0debywxbngdp2tmarqjbjp2ddj2uwjofsk8fq8s18z7akc4csost4p7i2dsrbrrr89i7h41hz9rtklf7arwsusyf0sn13wkdpjxxujz9jh0fzlzy3lfrxy23eaguw8bnypoqitgd559171zgp9kxqcu05ff0szp2xqvsx5j4ni7kgy8977oezjg8s == \k\u\z\n\a\3\6\3\6\f\j\h\z\s\n\8\r\g\f\m\k\5\a\o\a\r\r\g\k\o\w\t\y\h\f\l\j\b\m\b\x\j\8\m\w\u\d\z\v\n\3\p\1\9\g\0\j\a\r\q\w\9\l\8\m\8\o\e\3\t\x\7\a\4\v\0\t\2\n\a\p\t\f\m\b\7\1\6\o\l\0\y\x\e\r\g\m\s\q\l\h\p\i\9\b\s\i\o\p\l\t\j\j\j\o\j\x\j\x\8\7\b\f\h\h\1\t\8\x\q\f\d\v\d\p\u\b\h\z\1\o\m\1\u\2\p\7\n\m\c\s\m\j\i\v\s\y\m\q\j\h\e\8\p\q\d\g\5\6\r\g\5\n\n\a\v\y\4\m\e\f\y\4\v\6\h\6\1\i\b\h\p\o\a\f\e\4\h\7\g\f\f\x\a\8\6\o\p\d\e\5\a\h\r\8\9\d\6\v\5\j\q\a\5\4\x\a\u\v\o\2\4\3\c\3\y\0\f\2\6\c\u\5\j\w\j\f\w\c\j\a\0\w\9\5\s\5\q\2\b\m\6\y\s\d\d\4\u\t\p\a\5\i\z\4\w\v\2\3\3\i\t\d\u\j\a\0\7\1\o\1\4\u\0\n\f\e\4\m\p\s\q\x\h\i\m\u\y\z\c\h\l\8\l\l\n\n\u\s\4\0\d\8\q\e\k\s\0\d\e\b\y\w\x\b\n\g\d\p\2\t\m\a\r\q\j\b\j\p\2\d\d\j\2\u\w\j\o\f\s\k\8\f\q\8\s\1\8\z\7\a\k\c\4\c\s\o\s\t\4\p\7\i\2\d\s\r\b\r\r\r\8\9\i\7\h\4\1\h\z\9\r\t\k\l\f\7\a\r\w\s\u\s\y\f\0\s\n\1\3\w\k\d\p\j\x\x\u\j\z\9\j\h\0\f\z\l\z\y\3\l\f\r\x\y\2\3\e\a\g\u\w\8\b\n\y\p\o\q\i\t\g\d\5\5\9\1\7\1\z\g\p\9\k\x\q\c\u\0\5\f\f\0\s\z\p\2\x\q\v\s\x\5\j\4\n\i\7\k\g\y\8\9\7\7\o\e\z\j\g\8\s ]] 00:15:06.277 09:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:06.277 09:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:06.277 [2024-05-15 09:08:18.593443] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:06.277 [2024-05-15 09:08:18.593819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62553 ] 00:15:06.537 [2024-05-15 09:08:18.734884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.537 [2024-05-15 09:08:18.838674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.795  Copying: 512/512 [B] (average 250 kBps) 00:15:06.795 00:15:06.795 ************************************ 00:15:06.795 END TEST dd_flags_misc 00:15:06.795 09:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kuzna3636fjhzsn8rgfmk5aoarrgkowtyhfljbmbxj8mwudzvn3p19g0jarqw9l8m8oe3tx7a4v0t2naptfmb716ol0yxergmsqlhpi9bsiopltjjjojxjx87bfhh1t8xqfdvdpubhz1om1u2p7nmcsmjivsymqjhe8pqdg56rg5nnavy4mefy4v6h61ibhpoafe4h7gffxa86opde5ahr89d6v5jqa54xauvo243c3y0f26cu5jwjfwcja0w95s5q2bm6ysdd4utpa5iz4wv233itduja071o14u0nfe4mpsqxhimuyzchl8llnnus40d8qeks0debywxbngdp2tmarqjbjp2ddj2uwjofsk8fq8s18z7akc4csost4p7i2dsrbrrr89i7h41hz9rtklf7arwsusyf0sn13wkdpjxxujz9jh0fzlzy3lfrxy23eaguw8bnypoqitgd559171zgp9kxqcu05ff0szp2xqvsx5j4ni7kgy8977oezjg8s == \k\u\z\n\a\3\6\3\6\f\j\h\z\s\n\8\r\g\f\m\k\5\a\o\a\r\r\g\k\o\w\t\y\h\f\l\j\b\m\b\x\j\8\m\w\u\d\z\v\n\3\p\1\9\g\0\j\a\r\q\w\9\l\8\m\8\o\e\3\t\x\7\a\4\v\0\t\2\n\a\p\t\f\m\b\7\1\6\o\l\0\y\x\e\r\g\m\s\q\l\h\p\i\9\b\s\i\o\p\l\t\j\j\j\o\j\x\j\x\8\7\b\f\h\h\1\t\8\x\q\f\d\v\d\p\u\b\h\z\1\o\m\1\u\2\p\7\n\m\c\s\m\j\i\v\s\y\m\q\j\h\e\8\p\q\d\g\5\6\r\g\5\n\n\a\v\y\4\m\e\f\y\4\v\6\h\6\1\i\b\h\p\o\a\f\e\4\h\7\g\f\f\x\a\8\6\o\p\d\e\5\a\h\r\8\9\d\6\v\5\j\q\a\5\4\x\a\u\v\o\2\4\3\c\3\y\0\f\2\6\c\u\5\j\w\j\f\w\c\j\a\0\w\9\5\s\5\q\2\b\m\6\y\s\d\d\4\u\t\p\a\5\i\z\4\w\v\2\3\3\i\t\d\u\j\a\0\7\1\o\1\4\u\0\n\f\e\4\m\p\s\q\x\h\i\m\u\y\z\c\h\l\8\l\l\n\n\u\s\4\0\d\8\q\e\k\s\0\d\e\b\y\w\x\b\n\g\d\p\2\t\m\a\r\q\j\b\j\p\2\d\d\j\2\u\w\j\o\f\s\k\8\f\q\8\s\1\8\z\7\a\k\c\4\c\s\o\s\t\4\p\7\i\2\d\s\r\b\r\r\r\8\9\i\7\h\4\1\h\z\9\r\t\k\l\f\7\a\r\w\s\u\s\y\f\0\s\n\1\3\w\k\d\p\j\x\x\u\j\z\9\j\h\0\f\z\l\z\y\3\l\f\r\x\y\2\3\e\a\g\u\w\8\b\n\y\p\o\q\i\t\g\d\5\5\9\1\7\1\z\g\p\9\k\x\q\c\u\0\5\f\f\0\s\z\p\2\x\q\v\s\x\5\j\4\n\i\7\k\g\y\8\9\7\7\o\e\z\j\g\8\s ]] 00:15:06.795 00:15:06.795 real 0m4.846s 00:15:06.795 user 0m2.860s 00:15:06.796 sys 0m1.993s 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:06.796 ************************************ 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:15:06.796 * Second test run, disabling liburing, forcing AIO 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:06.796 ************************************ 00:15:06.796 START TEST dd_flag_append_forced_aio 00:15:06.796 ************************************ 
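The eight runs above cross iflag direct/nonblock with oflag direct/nonblock/sync/dsync and then compare the 512-byte payload of dd.dump1 against dd.dump0. A rough stand-alone equivalent with GNU dd, assuming a filesystem that accepts O_DIRECT at 512-byte alignment (file names are illustrative):

    dd if=/dev/urandom of=dump0 bs=512 count=1
    for iflag in direct nonblock; do
      for oflag in direct nonblock sync dsync; do
        dd if=dump0 iflag="$iflag" of=dump1 oflag="$oflag" bs=512
        cmp -s dump0 dump1 && echo "iflag=$iflag oflag=$oflag: contents match"
      done
    done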
00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # append 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=m8j0b3c73b41s1xlra006bxief1m001y 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=5yw1v8dgupc02lx0qtbk73y45ajlxlg1 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s m8j0b3c73b41s1xlra006bxief1m001y 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 5yw1v8dgupc02lx0qtbk73y45ajlxlg1 00:15:06.796 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:15:07.054 [2024-05-15 09:08:19.247445] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:07.054 [2024-05-15 09:08:19.247743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62587 ] 00:15:07.054 [2024-05-15 09:08:19.377391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.312 [2024-05-15 09:08:19.504138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.570  Copying: 32/32 [B] (average 31 kBps) 00:15:07.570 00:15:07.570 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 5yw1v8dgupc02lx0qtbk73y45ajlxlg1m8j0b3c73b41s1xlra006bxief1m001y == \5\y\w\1\v\8\d\g\u\p\c\0\2\l\x\0\q\t\b\k\7\3\y\4\5\a\j\l\x\l\g\1\m\8\j\0\b\3\c\7\3\b\4\1\s\1\x\l\r\a\0\0\6\b\x\i\e\f\1\m\0\0\1\y ]] 00:15:07.570 00:15:07.570 real 0m0.603s 00:15:07.570 user 0m0.351s 00:15:07.570 sys 0m0.128s 00:15:07.570 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:07.570 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:07.570 ************************************ 00:15:07.570 END TEST dd_flag_append_forced_aio 00:15:07.570 ************************************ 00:15:07.570 09:08:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:15:07.570 09:08:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:07.570 09:08:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:07.570 09:08:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:07.570 ************************************ 00:15:07.571 START TEST dd_flag_directory_forced_aio 00:15:07.571 ************************************ 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # directory 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:07.571 09:08:19 
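The append check above fills dump0 with m8j0b3c73b41s1xlra006bxief1m001y and dump1 with 5yw1v8dgupc02lx0qtbk73y45ajlxlg1, copies dump0 onto dump1 with O_APPEND, and expects dump1's bytes followed by dump0's. With GNU dd the same behaviour looks roughly like this; conv=notrunc keeps dd from truncating the destination before appending:

    printf 'm8j0b3c73b41s1xlra006bxief1m001y' > dump0
    printf '5yw1v8dgupc02lx0qtbk73y45ajlxlg1' > dump1
    dd if=dump0 of=dump1 oflag=append conv=notrunc
    [[ $(cat dump1) == 5yw1v8dgupc02lx0qtbk73y45ajlxlg1m8j0b3c73b41s1xlra006bxief1m001y ]] \
      && echo 'appended in order'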
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:07.571 09:08:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:07.571 [2024-05-15 09:08:19.910024] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:07.571 [2024-05-15 09:08:19.910204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62608 ] 00:15:07.829 [2024-05-15 09:08:20.047304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.829 [2024-05-15 09:08:20.162908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.829 [2024-05-15 09:08:20.255892] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:07.829 [2024-05-15 09:08:20.256121] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:07.829 [2024-05-15 09:08:20.256203] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:08.087 [2024-05-15 09:08:20.350881] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # es=236 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # es=108 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:08.087 09:08:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:08.087 [2024-05-15 09:08:20.527817] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:08.087 [2024-05-15 09:08:20.528456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62623 ] 00:15:08.346 [2024-05-15 09:08:20.666343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.605 [2024-05-15 09:08:20.794915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.605 [2024-05-15 09:08:20.865784] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:08.605 [2024-05-15 09:08:20.866074] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:08.605 [2024-05-15 09:08:20.866157] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:08.605 [2024-05-15 09:08:20.961680] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # es=236 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # es=108 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:08.865 00:15:08.865 real 0m1.216s 00:15:08.865 user 0m0.697s 00:15:08.865 sys 0m0.284s 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:08.865 ************************************ 00:15:08.865 END 
TEST dd_flag_directory_forced_aio 00:15:08.865 ************************************ 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:08.865 ************************************ 00:15:08.865 START TEST dd_flag_nofollow_forced_aio 00:15:08.865 ************************************ 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # nofollow 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:08.865 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # 
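The two directory-flag failures recorded above come down to opening a regular file with O_DIRECTORY, which the kernel rejects with ENOTDIR. GNU dd exposes the same flag, so a minimal reproduction (the exact error wording can vary by libc) is:

    dd if=/dev/urandom of=dump0 bs=512 count=1
    if ! dd if=dump0 iflag=directory of=/dev/null 2>err.log; then
      grep -qi 'not a directory' err.log && echo 'O_DIRECTORY on a regular file rejected as expected'
    fi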
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:08.865 [2024-05-15 09:08:21.218772] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:08.865 [2024-05-15 09:08:21.219162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62646 ] 00:15:09.125 [2024-05-15 09:08:21.358212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.125 [2024-05-15 09:08:21.466127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.125 [2024-05-15 09:08:21.536864] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:09.125 [2024-05-15 09:08:21.537124] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:09.125 [2024-05-15 09:08:21.537223] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.384 [2024-05-15 09:08:21.632291] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # es=216 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # es=88 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:09.384 09:08:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:09.384 [2024-05-15 09:08:21.800778] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:09.384 [2024-05-15 09:08:21.801004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62661 ] 00:15:09.643 [2024-05-15 09:08:21.937023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.643 [2024-05-15 09:08:22.041963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.902 [2024-05-15 09:08:22.114054] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:09.902 [2024-05-15 09:08:22.114326] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:09.902 [2024-05-15 09:08:22.114432] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.902 [2024-05-15 09:08:22.208795] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # es=216 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # es=88 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:09.902 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:10.161 [2024-05-15 09:08:22.381711] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:10.161 [2024-05-15 09:08:22.381958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62673 ] 00:15:10.161 [2024-05-15 09:08:22.521147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.419 [2024-05-15 09:08:22.620295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.678  Copying: 512/512 [B] (average 500 kBps) 00:15:10.678 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ s004kflk26i1oxn6aeco8v0z07nfp56fy9bk2o490uszdl5ljhng8o587h5y4kkdjuzdbuk0s8thyyhqd4dwcv3p525or0qp6jdeb9ovgnf98uvwrszoqfug5egigy9ryiwad8ygt61puf8xv4rh9h88grpqns4knr0xnh9k10hyjebz9j3d5p95lob3gc2llalapil5qrdtsunsupp1qhk46glk38ng7za94ew7whxerc4x1u0mg5ckmp7a857uvmc02xrvzjusupw8x0v59w08kaadmc2cblsvq6cfefx1qjenp9h78qtr9d325c0zjytkfqn8l27uzhwvyppp58mlgaxolvl3ajqmskcd5hkqqvl3vsccoy2mddfxwyhpw2f2bdcna9lnx5xr8wtpbk3jrhpr3bluc56w4nr3kton367egkx43bfvao622uesrv47blkxyc9zl4lomx9yzdr7huxt1cso5sho9sio464v8j0ak8hbya7r857s44t7 == \s\0\0\4\k\f\l\k\2\6\i\1\o\x\n\6\a\e\c\o\8\v\0\z\0\7\n\f\p\5\6\f\y\9\b\k\2\o\4\9\0\u\s\z\d\l\5\l\j\h\n\g\8\o\5\8\7\h\5\y\4\k\k\d\j\u\z\d\b\u\k\0\s\8\t\h\y\y\h\q\d\4\d\w\c\v\3\p\5\2\5\o\r\0\q\p\6\j\d\e\b\9\o\v\g\n\f\9\8\u\v\w\r\s\z\o\q\f\u\g\5\e\g\i\g\y\9\r\y\i\w\a\d\8\y\g\t\6\1\p\u\f\8\x\v\4\r\h\9\h\8\8\g\r\p\q\n\s\4\k\n\r\0\x\n\h\9\k\1\0\h\y\j\e\b\z\9\j\3\d\5\p\9\5\l\o\b\3\g\c\2\l\l\a\l\a\p\i\l\5\q\r\d\t\s\u\n\s\u\p\p\1\q\h\k\4\6\g\l\k\3\8\n\g\7\z\a\9\4\e\w\7\w\h\x\e\r\c\4\x\1\u\0\m\g\5\c\k\m\p\7\a\8\5\7\u\v\m\c\0\2\x\r\v\z\j\u\s\u\p\w\8\x\0\v\5\9\w\0\8\k\a\a\d\m\c\2\c\b\l\s\v\q\6\c\f\e\f\x\1\q\j\e\n\p\9\h\7\8\q\t\r\9\d\3\2\5\c\0\z\j\y\t\k\f\q\n\8\l\2\7\u\z\h\w\v\y\p\p\p\5\8\m\l\g\a\x\o\l\v\l\3\a\j\q\m\s\k\c\d\5\h\k\q\q\v\l\3\v\s\c\c\o\y\2\m\d\d\f\x\w\y\h\p\w\2\f\2\b\d\c\n\a\9\l\n\x\5\x\r\8\w\t\p\b\k\3\j\r\h\p\r\3\b\l\u\c\5\6\w\4\n\r\3\k\t\o\n\3\6\7\e\g\k\x\4\3\b\f\v\a\o\6\2\2\u\e\s\r\v\4\7\b\l\k\x\y\c\9\z\l\4\l\o\m\x\9\y\z\d\r\7\h\u\x\t\1\c\s\o\5\s\h\o\9\s\i\o\4\6\4\v\8\j\0\a\k\8\h\b\y\a\7\r\8\5\7\s\4\4\t\7 ]] 00:15:10.678 00:15:10.678 real 0m1.767s 00:15:10.678 user 0m1.037s 00:15:10.678 sys 0m0.388s 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:10.678 ************************************ 00:15:10.678 END TEST dd_flag_nofollow_forced_aio 00:15:10.678 ************************************ 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:10.678 ************************************ 00:15:10.678 START TEST dd_flag_noatime_forced_aio 00:15:10.678 ************************************ 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # noatime 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 
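The nofollow sequence that just finished links dd.dump0.link and dd.dump1.link to the dump files, expects O_NOFOLLOW opens of the links to fail with ELOOP (the "Too many levels of symbolic links" errors above), and finally verifies that a copy through the link without the flag still succeeds. Roughly, with GNU dd:

    dd if=/dev/urandom of=dump0 bs=512 count=1
    ln -sf dump0 dump0.link
    if ! dd if=dump0.link iflag=nofollow of=/dev/null 2>err.log; then
      grep -qi 'symbolic links' err.log && echo 'O_NOFOLLOW refused the symlink'
    fi
    dd if=dump0.link of=dump1            # without nofollow the link is simply followed
    cmp -s dump0 dump1 && echo 'copy through the symlink matches the target'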
-- # local atime_of 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1715764102 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1715764102 00:15:10.678 09:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:15:11.679 09:08:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:11.679 [2024-05-15 09:08:24.047417] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:11.679 [2024-05-15 09:08:24.047799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62709 ] 00:15:11.938 [2024-05-15 09:08:24.193370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.938 [2024-05-15 09:08:24.308883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.196  Copying: 512/512 [B] (average 500 kBps) 00:15:12.196 00:15:12.196 09:08:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:12.196 09:08:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1715764102 )) 00:15:12.196 09:08:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:12.196 09:08:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1715764102 )) 00:15:12.196 09:08:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:12.455 [2024-05-15 09:08:24.683723] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:12.455 [2024-05-15 09:08:24.684592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62726 ] 00:15:12.455 [2024-05-15 09:08:24.832163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.714 [2024-05-15 09:08:24.927356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.973  Copying: 512/512 [B] (average 500 kBps) 00:15:12.973 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1715764104 )) 00:15:12.973 00:15:12.973 real 0m2.257s 00:15:12.973 user 0m0.713s 00:15:12.973 sys 0m0.295s 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:12.973 ************************************ 00:15:12.973 END TEST dd_flag_noatime_forced_aio 00:15:12.973 ************************************ 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:12.973 ************************************ 00:15:12.973 START TEST dd_flags_misc_forced_aio 00:15:12.973 ************************************ 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # io 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:12.973 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:12.973 [2024-05-15 09:08:25.343471] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:12.973 [2024-05-15 09:08:25.343744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62747 ] 00:15:13.232 [2024-05-15 09:08:25.480661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.232 [2024-05-15 09:08:25.601292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.491  Copying: 512/512 [B] (average 500 kBps) 00:15:13.491 00:15:13.491 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 3kueymlxknydsaowimwevi9gqksyqxb7bwy4f5q7aaf3w6vww9c0m2swaxilj19mu3tdliil9yq87n8xp9hvsjwv53y0iegfyg942x5fm1lrnzh2oo49vkce713blww7kmbnvj7rj202zpdeu58s4vtero9whgqec4apmvadlqv3ulc8u2oxof8rn5rg23ku7jww260suuk4ar4tbdbjfbxqgjaerny0x6t8dc9tada27kqpvw2mmyphnnlqr5h7mx4cvb9zxkmmnvwd8styaskr37py7yannhu7mlhi6s2ph7dbw5f28a6l7e0owt79skl66fx6r782droxqypt1iulrn5tfeqjrk6gen2zm6vpjjf64vjzj3rf4xghheinf7p749uxmxyebtwnkdqadffkecpjnw3bbdhtj1dlfogp7bqxm9sb4ly4wmac044xpge6rbm9hpl9bkc19gc0lp2ahmhqk4mr8yveaho2vsqti54hjzi6lbvz3isb6g35 == \3\k\u\e\y\m\l\x\k\n\y\d\s\a\o\w\i\m\w\e\v\i\9\g\q\k\s\y\q\x\b\7\b\w\y\4\f\5\q\7\a\a\f\3\w\6\v\w\w\9\c\0\m\2\s\w\a\x\i\l\j\1\9\m\u\3\t\d\l\i\i\l\9\y\q\8\7\n\8\x\p\9\h\v\s\j\w\v\5\3\y\0\i\e\g\f\y\g\9\4\2\x\5\f\m\1\l\r\n\z\h\2\o\o\4\9\v\k\c\e\7\1\3\b\l\w\w\7\k\m\b\n\v\j\7\r\j\2\0\2\z\p\d\e\u\5\8\s\4\v\t\e\r\o\9\w\h\g\q\e\c\4\a\p\m\v\a\d\l\q\v\3\u\l\c\8\u\2\o\x\o\f\8\r\n\5\r\g\2\3\k\u\7\j\w\w\2\6\0\s\u\u\k\4\a\r\4\t\b\d\b\j\f\b\x\q\g\j\a\e\r\n\y\0\x\6\t\8\d\c\9\t\a\d\a\2\7\k\q\p\v\w\2\m\m\y\p\h\n\n\l\q\r\5\h\7\m\x\4\c\v\b\9\z\x\k\m\m\n\v\w\d\8\s\t\y\a\s\k\r\3\7\p\y\7\y\a\n\n\h\u\7\m\l\h\i\6\s\2\p\h\7\d\b\w\5\f\2\8\a\6\l\7\e\0\o\w\t\7\9\s\k\l\6\6\f\x\6\r\7\8\2\d\r\o\x\q\y\p\t\1\i\u\l\r\n\5\t\f\e\q\j\r\k\6\g\e\n\2\z\m\6\v\p\j\j\f\6\4\v\j\z\j\3\r\f\4\x\g\h\h\e\i\n\f\7\p\7\4\9\u\x\m\x\y\e\b\t\w\n\k\d\q\a\d\f\f\k\e\c\p\j\n\w\3\b\b\d\h\t\j\1\d\l\f\o\g\p\7\b\q\x\m\9\s\b\4\l\y\4\w\m\a\c\0\4\4\x\p\g\e\6\r\b\m\9\h\p\l\9\b\k\c\1\9\g\c\0\l\p\2\a\h\m\h\q\k\4\m\r\8\y\v\e\a\h\o\2\v\s\q\t\i\5\4\h\j\z\i\6\l\b\v\z\3\i\s\b\6\g\3\5 ]] 00:15:13.491 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:13.491 09:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:13.750 [2024-05-15 09:08:25.954893] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:13.750 [2024-05-15 09:08:25.955155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62760 ] 00:15:13.750 [2024-05-15 09:08:26.089042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.750 [2024-05-15 09:08:26.188452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.268  Copying: 512/512 [B] (average 500 kBps) 00:15:14.268 00:15:14.268 09:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 3kueymlxknydsaowimwevi9gqksyqxb7bwy4f5q7aaf3w6vww9c0m2swaxilj19mu3tdliil9yq87n8xp9hvsjwv53y0iegfyg942x5fm1lrnzh2oo49vkce713blww7kmbnvj7rj202zpdeu58s4vtero9whgqec4apmvadlqv3ulc8u2oxof8rn5rg23ku7jww260suuk4ar4tbdbjfbxqgjaerny0x6t8dc9tada27kqpvw2mmyphnnlqr5h7mx4cvb9zxkmmnvwd8styaskr37py7yannhu7mlhi6s2ph7dbw5f28a6l7e0owt79skl66fx6r782droxqypt1iulrn5tfeqjrk6gen2zm6vpjjf64vjzj3rf4xghheinf7p749uxmxyebtwnkdqadffkecpjnw3bbdhtj1dlfogp7bqxm9sb4ly4wmac044xpge6rbm9hpl9bkc19gc0lp2ahmhqk4mr8yveaho2vsqti54hjzi6lbvz3isb6g35 == \3\k\u\e\y\m\l\x\k\n\y\d\s\a\o\w\i\m\w\e\v\i\9\g\q\k\s\y\q\x\b\7\b\w\y\4\f\5\q\7\a\a\f\3\w\6\v\w\w\9\c\0\m\2\s\w\a\x\i\l\j\1\9\m\u\3\t\d\l\i\i\l\9\y\q\8\7\n\8\x\p\9\h\v\s\j\w\v\5\3\y\0\i\e\g\f\y\g\9\4\2\x\5\f\m\1\l\r\n\z\h\2\o\o\4\9\v\k\c\e\7\1\3\b\l\w\w\7\k\m\b\n\v\j\7\r\j\2\0\2\z\p\d\e\u\5\8\s\4\v\t\e\r\o\9\w\h\g\q\e\c\4\a\p\m\v\a\d\l\q\v\3\u\l\c\8\u\2\o\x\o\f\8\r\n\5\r\g\2\3\k\u\7\j\w\w\2\6\0\s\u\u\k\4\a\r\4\t\b\d\b\j\f\b\x\q\g\j\a\e\r\n\y\0\x\6\t\8\d\c\9\t\a\d\a\2\7\k\q\p\v\w\2\m\m\y\p\h\n\n\l\q\r\5\h\7\m\x\4\c\v\b\9\z\x\k\m\m\n\v\w\d\8\s\t\y\a\s\k\r\3\7\p\y\7\y\a\n\n\h\u\7\m\l\h\i\6\s\2\p\h\7\d\b\w\5\f\2\8\a\6\l\7\e\0\o\w\t\7\9\s\k\l\6\6\f\x\6\r\7\8\2\d\r\o\x\q\y\p\t\1\i\u\l\r\n\5\t\f\e\q\j\r\k\6\g\e\n\2\z\m\6\v\p\j\j\f\6\4\v\j\z\j\3\r\f\4\x\g\h\h\e\i\n\f\7\p\7\4\9\u\x\m\x\y\e\b\t\w\n\k\d\q\a\d\f\f\k\e\c\p\j\n\w\3\b\b\d\h\t\j\1\d\l\f\o\g\p\7\b\q\x\m\9\s\b\4\l\y\4\w\m\a\c\0\4\4\x\p\g\e\6\r\b\m\9\h\p\l\9\b\k\c\1\9\g\c\0\l\p\2\a\h\m\h\q\k\4\m\r\8\y\v\e\a\h\o\2\v\s\q\t\i\5\4\h\j\z\i\6\l\b\v\z\3\i\s\b\6\g\3\5 ]] 00:15:14.268 09:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:14.268 09:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:14.268 [2024-05-15 09:08:26.549395] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:14.268 [2024-05-15 09:08:26.549754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62768 ] 00:15:14.268 [2024-05-15 09:08:26.693011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.532 [2024-05-15 09:08:26.801320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.790  Copying: 512/512 [B] (average 166 kBps) 00:15:14.790 00:15:14.790 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 3kueymlxknydsaowimwevi9gqksyqxb7bwy4f5q7aaf3w6vww9c0m2swaxilj19mu3tdliil9yq87n8xp9hvsjwv53y0iegfyg942x5fm1lrnzh2oo49vkce713blww7kmbnvj7rj202zpdeu58s4vtero9whgqec4apmvadlqv3ulc8u2oxof8rn5rg23ku7jww260suuk4ar4tbdbjfbxqgjaerny0x6t8dc9tada27kqpvw2mmyphnnlqr5h7mx4cvb9zxkmmnvwd8styaskr37py7yannhu7mlhi6s2ph7dbw5f28a6l7e0owt79skl66fx6r782droxqypt1iulrn5tfeqjrk6gen2zm6vpjjf64vjzj3rf4xghheinf7p749uxmxyebtwnkdqadffkecpjnw3bbdhtj1dlfogp7bqxm9sb4ly4wmac044xpge6rbm9hpl9bkc19gc0lp2ahmhqk4mr8yveaho2vsqti54hjzi6lbvz3isb6g35 == \3\k\u\e\y\m\l\x\k\n\y\d\s\a\o\w\i\m\w\e\v\i\9\g\q\k\s\y\q\x\b\7\b\w\y\4\f\5\q\7\a\a\f\3\w\6\v\w\w\9\c\0\m\2\s\w\a\x\i\l\j\1\9\m\u\3\t\d\l\i\i\l\9\y\q\8\7\n\8\x\p\9\h\v\s\j\w\v\5\3\y\0\i\e\g\f\y\g\9\4\2\x\5\f\m\1\l\r\n\z\h\2\o\o\4\9\v\k\c\e\7\1\3\b\l\w\w\7\k\m\b\n\v\j\7\r\j\2\0\2\z\p\d\e\u\5\8\s\4\v\t\e\r\o\9\w\h\g\q\e\c\4\a\p\m\v\a\d\l\q\v\3\u\l\c\8\u\2\o\x\o\f\8\r\n\5\r\g\2\3\k\u\7\j\w\w\2\6\0\s\u\u\k\4\a\r\4\t\b\d\b\j\f\b\x\q\g\j\a\e\r\n\y\0\x\6\t\8\d\c\9\t\a\d\a\2\7\k\q\p\v\w\2\m\m\y\p\h\n\n\l\q\r\5\h\7\m\x\4\c\v\b\9\z\x\k\m\m\n\v\w\d\8\s\t\y\a\s\k\r\3\7\p\y\7\y\a\n\n\h\u\7\m\l\h\i\6\s\2\p\h\7\d\b\w\5\f\2\8\a\6\l\7\e\0\o\w\t\7\9\s\k\l\6\6\f\x\6\r\7\8\2\d\r\o\x\q\y\p\t\1\i\u\l\r\n\5\t\f\e\q\j\r\k\6\g\e\n\2\z\m\6\v\p\j\j\f\6\4\v\j\z\j\3\r\f\4\x\g\h\h\e\i\n\f\7\p\7\4\9\u\x\m\x\y\e\b\t\w\n\k\d\q\a\d\f\f\k\e\c\p\j\n\w\3\b\b\d\h\t\j\1\d\l\f\o\g\p\7\b\q\x\m\9\s\b\4\l\y\4\w\m\a\c\0\4\4\x\p\g\e\6\r\b\m\9\h\p\l\9\b\k\c\1\9\g\c\0\l\p\2\a\h\m\h\q\k\4\m\r\8\y\v\e\a\h\o\2\v\s\q\t\i\5\4\h\j\z\i\6\l\b\v\z\3\i\s\b\6\g\3\5 ]] 00:15:14.790 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:14.790 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:14.790 [2024-05-15 09:08:27.171241] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:14.790 [2024-05-15 09:08:27.171649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62775 ] 00:15:15.048 [2024-05-15 09:08:27.314460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.048 [2024-05-15 09:08:27.418290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.306  Copying: 512/512 [B] (average 500 kBps) 00:15:15.306 00:15:15.306 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 3kueymlxknydsaowimwevi9gqksyqxb7bwy4f5q7aaf3w6vww9c0m2swaxilj19mu3tdliil9yq87n8xp9hvsjwv53y0iegfyg942x5fm1lrnzh2oo49vkce713blww7kmbnvj7rj202zpdeu58s4vtero9whgqec4apmvadlqv3ulc8u2oxof8rn5rg23ku7jww260suuk4ar4tbdbjfbxqgjaerny0x6t8dc9tada27kqpvw2mmyphnnlqr5h7mx4cvb9zxkmmnvwd8styaskr37py7yannhu7mlhi6s2ph7dbw5f28a6l7e0owt79skl66fx6r782droxqypt1iulrn5tfeqjrk6gen2zm6vpjjf64vjzj3rf4xghheinf7p749uxmxyebtwnkdqadffkecpjnw3bbdhtj1dlfogp7bqxm9sb4ly4wmac044xpge6rbm9hpl9bkc19gc0lp2ahmhqk4mr8yveaho2vsqti54hjzi6lbvz3isb6g35 == \3\k\u\e\y\m\l\x\k\n\y\d\s\a\o\w\i\m\w\e\v\i\9\g\q\k\s\y\q\x\b\7\b\w\y\4\f\5\q\7\a\a\f\3\w\6\v\w\w\9\c\0\m\2\s\w\a\x\i\l\j\1\9\m\u\3\t\d\l\i\i\l\9\y\q\8\7\n\8\x\p\9\h\v\s\j\w\v\5\3\y\0\i\e\g\f\y\g\9\4\2\x\5\f\m\1\l\r\n\z\h\2\o\o\4\9\v\k\c\e\7\1\3\b\l\w\w\7\k\m\b\n\v\j\7\r\j\2\0\2\z\p\d\e\u\5\8\s\4\v\t\e\r\o\9\w\h\g\q\e\c\4\a\p\m\v\a\d\l\q\v\3\u\l\c\8\u\2\o\x\o\f\8\r\n\5\r\g\2\3\k\u\7\j\w\w\2\6\0\s\u\u\k\4\a\r\4\t\b\d\b\j\f\b\x\q\g\j\a\e\r\n\y\0\x\6\t\8\d\c\9\t\a\d\a\2\7\k\q\p\v\w\2\m\m\y\p\h\n\n\l\q\r\5\h\7\m\x\4\c\v\b\9\z\x\k\m\m\n\v\w\d\8\s\t\y\a\s\k\r\3\7\p\y\7\y\a\n\n\h\u\7\m\l\h\i\6\s\2\p\h\7\d\b\w\5\f\2\8\a\6\l\7\e\0\o\w\t\7\9\s\k\l\6\6\f\x\6\r\7\8\2\d\r\o\x\q\y\p\t\1\i\u\l\r\n\5\t\f\e\q\j\r\k\6\g\e\n\2\z\m\6\v\p\j\j\f\6\4\v\j\z\j\3\r\f\4\x\g\h\h\e\i\n\f\7\p\7\4\9\u\x\m\x\y\e\b\t\w\n\k\d\q\a\d\f\f\k\e\c\p\j\n\w\3\b\b\d\h\t\j\1\d\l\f\o\g\p\7\b\q\x\m\9\s\b\4\l\y\4\w\m\a\c\0\4\4\x\p\g\e\6\r\b\m\9\h\p\l\9\b\k\c\1\9\g\c\0\l\p\2\a\h\m\h\q\k\4\m\r\8\y\v\e\a\h\o\2\v\s\q\t\i\5\4\h\j\z\i\6\l\b\v\z\3\i\s\b\6\g\3\5 ]] 00:15:15.306 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:15.306 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:15:15.306 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:15.306 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:15.306 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:15.306 09:08:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:15.565 [2024-05-15 09:08:27.788111] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:15.565 [2024-05-15 09:08:27.788389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62788 ] 00:15:15.565 [2024-05-15 09:08:27.925695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.823 [2024-05-15 09:08:28.022207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.081  Copying: 512/512 [B] (average 500 kBps) 00:15:16.081 00:15:16.081 09:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zmj7jb1anjvm61po33cfonju0yp039dvjabtjyos71ragowvk4csuufxr4qc5zw91a9ldidvve8t0ugdj1zwo8bn2fnjpbm0k0771739f1h8qh3zhnrf3ptbnjgb6zyz8jlblb7zkoqxtg38u1iul3iq4d6se5emd6hq5f86noq8xsff1bx58tvy22mlukvj7lcptxwqgrd32vzgjks8oscbs25tosvo6aczb9tk37433x425pc44b9ajravzbr8rscke2zcq7j3zlvedf0a6u9u6f26w7x7qb9syxo1u6y6wis5g2jgujlx8sr81cxn8cdrx4hcvjupqr8yv9d8al17b81p6ns6l730di4ptei3qn9zcb1bo5vkxwwzdstdhcmifn8z3pstbenga8qcs0zg9zyvac4pd9y1hfp5cyuv2v5jp80b4kpy92wrs5n37hw7qs3xr8q9mo4h5s99whj6kg2ym8ithyli7ta5og13e84be4wp013lmag69l94 == \z\m\j\7\j\b\1\a\n\j\v\m\6\1\p\o\3\3\c\f\o\n\j\u\0\y\p\0\3\9\d\v\j\a\b\t\j\y\o\s\7\1\r\a\g\o\w\v\k\4\c\s\u\u\f\x\r\4\q\c\5\z\w\9\1\a\9\l\d\i\d\v\v\e\8\t\0\u\g\d\j\1\z\w\o\8\b\n\2\f\n\j\p\b\m\0\k\0\7\7\1\7\3\9\f\1\h\8\q\h\3\z\h\n\r\f\3\p\t\b\n\j\g\b\6\z\y\z\8\j\l\b\l\b\7\z\k\o\q\x\t\g\3\8\u\1\i\u\l\3\i\q\4\d\6\s\e\5\e\m\d\6\h\q\5\f\8\6\n\o\q\8\x\s\f\f\1\b\x\5\8\t\v\y\2\2\m\l\u\k\v\j\7\l\c\p\t\x\w\q\g\r\d\3\2\v\z\g\j\k\s\8\o\s\c\b\s\2\5\t\o\s\v\o\6\a\c\z\b\9\t\k\3\7\4\3\3\x\4\2\5\p\c\4\4\b\9\a\j\r\a\v\z\b\r\8\r\s\c\k\e\2\z\c\q\7\j\3\z\l\v\e\d\f\0\a\6\u\9\u\6\f\2\6\w\7\x\7\q\b\9\s\y\x\o\1\u\6\y\6\w\i\s\5\g\2\j\g\u\j\l\x\8\s\r\8\1\c\x\n\8\c\d\r\x\4\h\c\v\j\u\p\q\r\8\y\v\9\d\8\a\l\1\7\b\8\1\p\6\n\s\6\l\7\3\0\d\i\4\p\t\e\i\3\q\n\9\z\c\b\1\b\o\5\v\k\x\w\w\z\d\s\t\d\h\c\m\i\f\n\8\z\3\p\s\t\b\e\n\g\a\8\q\c\s\0\z\g\9\z\y\v\a\c\4\p\d\9\y\1\h\f\p\5\c\y\u\v\2\v\5\j\p\8\0\b\4\k\p\y\9\2\w\r\s\5\n\3\7\h\w\7\q\s\3\x\r\8\q\9\m\o\4\h\5\s\9\9\w\h\j\6\k\g\2\y\m\8\i\t\h\y\l\i\7\t\a\5\o\g\1\3\e\8\4\b\e\4\w\p\0\1\3\l\m\a\g\6\9\l\9\4 ]] 00:15:16.081 09:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:16.081 09:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:16.081 [2024-05-15 09:08:28.379392] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:16.081 [2024-05-15 09:08:28.379804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62790 ] 00:15:16.081 [2024-05-15 09:08:28.522183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.339 [2024-05-15 09:08:28.623550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.597  Copying: 512/512 [B] (average 500 kBps) 00:15:16.597 00:15:16.597 09:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zmj7jb1anjvm61po33cfonju0yp039dvjabtjyos71ragowvk4csuufxr4qc5zw91a9ldidvve8t0ugdj1zwo8bn2fnjpbm0k0771739f1h8qh3zhnrf3ptbnjgb6zyz8jlblb7zkoqxtg38u1iul3iq4d6se5emd6hq5f86noq8xsff1bx58tvy22mlukvj7lcptxwqgrd32vzgjks8oscbs25tosvo6aczb9tk37433x425pc44b9ajravzbr8rscke2zcq7j3zlvedf0a6u9u6f26w7x7qb9syxo1u6y6wis5g2jgujlx8sr81cxn8cdrx4hcvjupqr8yv9d8al17b81p6ns6l730di4ptei3qn9zcb1bo5vkxwwzdstdhcmifn8z3pstbenga8qcs0zg9zyvac4pd9y1hfp5cyuv2v5jp80b4kpy92wrs5n37hw7qs3xr8q9mo4h5s99whj6kg2ym8ithyli7ta5og13e84be4wp013lmag69l94 == \z\m\j\7\j\b\1\a\n\j\v\m\6\1\p\o\3\3\c\f\o\n\j\u\0\y\p\0\3\9\d\v\j\a\b\t\j\y\o\s\7\1\r\a\g\o\w\v\k\4\c\s\u\u\f\x\r\4\q\c\5\z\w\9\1\a\9\l\d\i\d\v\v\e\8\t\0\u\g\d\j\1\z\w\o\8\b\n\2\f\n\j\p\b\m\0\k\0\7\7\1\7\3\9\f\1\h\8\q\h\3\z\h\n\r\f\3\p\t\b\n\j\g\b\6\z\y\z\8\j\l\b\l\b\7\z\k\o\q\x\t\g\3\8\u\1\i\u\l\3\i\q\4\d\6\s\e\5\e\m\d\6\h\q\5\f\8\6\n\o\q\8\x\s\f\f\1\b\x\5\8\t\v\y\2\2\m\l\u\k\v\j\7\l\c\p\t\x\w\q\g\r\d\3\2\v\z\g\j\k\s\8\o\s\c\b\s\2\5\t\o\s\v\o\6\a\c\z\b\9\t\k\3\7\4\3\3\x\4\2\5\p\c\4\4\b\9\a\j\r\a\v\z\b\r\8\r\s\c\k\e\2\z\c\q\7\j\3\z\l\v\e\d\f\0\a\6\u\9\u\6\f\2\6\w\7\x\7\q\b\9\s\y\x\o\1\u\6\y\6\w\i\s\5\g\2\j\g\u\j\l\x\8\s\r\8\1\c\x\n\8\c\d\r\x\4\h\c\v\j\u\p\q\r\8\y\v\9\d\8\a\l\1\7\b\8\1\p\6\n\s\6\l\7\3\0\d\i\4\p\t\e\i\3\q\n\9\z\c\b\1\b\o\5\v\k\x\w\w\z\d\s\t\d\h\c\m\i\f\n\8\z\3\p\s\t\b\e\n\g\a\8\q\c\s\0\z\g\9\z\y\v\a\c\4\p\d\9\y\1\h\f\p\5\c\y\u\v\2\v\5\j\p\8\0\b\4\k\p\y\9\2\w\r\s\5\n\3\7\h\w\7\q\s\3\x\r\8\q\9\m\o\4\h\5\s\9\9\w\h\j\6\k\g\2\y\m\8\i\t\h\y\l\i\7\t\a\5\o\g\1\3\e\8\4\b\e\4\w\p\0\1\3\l\m\a\g\6\9\l\9\4 ]] 00:15:16.597 09:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:16.597 09:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:16.597 [2024-05-15 09:08:28.957133] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:16.597 [2024-05-15 09:08:28.957381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62803 ] 00:15:16.856 [2024-05-15 09:08:29.091558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.856 [2024-05-15 09:08:29.191996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.114  Copying: 512/512 [B] (average 250 kBps) 00:15:17.114 00:15:17.114 09:08:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zmj7jb1anjvm61po33cfonju0yp039dvjabtjyos71ragowvk4csuufxr4qc5zw91a9ldidvve8t0ugdj1zwo8bn2fnjpbm0k0771739f1h8qh3zhnrf3ptbnjgb6zyz8jlblb7zkoqxtg38u1iul3iq4d6se5emd6hq5f86noq8xsff1bx58tvy22mlukvj7lcptxwqgrd32vzgjks8oscbs25tosvo6aczb9tk37433x425pc44b9ajravzbr8rscke2zcq7j3zlvedf0a6u9u6f26w7x7qb9syxo1u6y6wis5g2jgujlx8sr81cxn8cdrx4hcvjupqr8yv9d8al17b81p6ns6l730di4ptei3qn9zcb1bo5vkxwwzdstdhcmifn8z3pstbenga8qcs0zg9zyvac4pd9y1hfp5cyuv2v5jp80b4kpy92wrs5n37hw7qs3xr8q9mo4h5s99whj6kg2ym8ithyli7ta5og13e84be4wp013lmag69l94 == \z\m\j\7\j\b\1\a\n\j\v\m\6\1\p\o\3\3\c\f\o\n\j\u\0\y\p\0\3\9\d\v\j\a\b\t\j\y\o\s\7\1\r\a\g\o\w\v\k\4\c\s\u\u\f\x\r\4\q\c\5\z\w\9\1\a\9\l\d\i\d\v\v\e\8\t\0\u\g\d\j\1\z\w\o\8\b\n\2\f\n\j\p\b\m\0\k\0\7\7\1\7\3\9\f\1\h\8\q\h\3\z\h\n\r\f\3\p\t\b\n\j\g\b\6\z\y\z\8\j\l\b\l\b\7\z\k\o\q\x\t\g\3\8\u\1\i\u\l\3\i\q\4\d\6\s\e\5\e\m\d\6\h\q\5\f\8\6\n\o\q\8\x\s\f\f\1\b\x\5\8\t\v\y\2\2\m\l\u\k\v\j\7\l\c\p\t\x\w\q\g\r\d\3\2\v\z\g\j\k\s\8\o\s\c\b\s\2\5\t\o\s\v\o\6\a\c\z\b\9\t\k\3\7\4\3\3\x\4\2\5\p\c\4\4\b\9\a\j\r\a\v\z\b\r\8\r\s\c\k\e\2\z\c\q\7\j\3\z\l\v\e\d\f\0\a\6\u\9\u\6\f\2\6\w\7\x\7\q\b\9\s\y\x\o\1\u\6\y\6\w\i\s\5\g\2\j\g\u\j\l\x\8\s\r\8\1\c\x\n\8\c\d\r\x\4\h\c\v\j\u\p\q\r\8\y\v\9\d\8\a\l\1\7\b\8\1\p\6\n\s\6\l\7\3\0\d\i\4\p\t\e\i\3\q\n\9\z\c\b\1\b\o\5\v\k\x\w\w\z\d\s\t\d\h\c\m\i\f\n\8\z\3\p\s\t\b\e\n\g\a\8\q\c\s\0\z\g\9\z\y\v\a\c\4\p\d\9\y\1\h\f\p\5\c\y\u\v\2\v\5\j\p\8\0\b\4\k\p\y\9\2\w\r\s\5\n\3\7\h\w\7\q\s\3\x\r\8\q\9\m\o\4\h\5\s\9\9\w\h\j\6\k\g\2\y\m\8\i\t\h\y\l\i\7\t\a\5\o\g\1\3\e\8\4\b\e\4\w\p\0\1\3\l\m\a\g\6\9\l\9\4 ]] 00:15:17.114 09:08:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:17.114 09:08:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:17.114 [2024-05-15 09:08:29.542058] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:17.114 [2024-05-15 09:08:29.542371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62811 ] 00:15:17.372 [2024-05-15 09:08:29.688581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.372 [2024-05-15 09:08:29.814266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.889  Copying: 512/512 [B] (average 500 kBps) 00:15:17.889 00:15:17.889 ************************************ 00:15:17.889 END TEST dd_flags_misc_forced_aio 00:15:17.889 ************************************ 00:15:17.889 09:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zmj7jb1anjvm61po33cfonju0yp039dvjabtjyos71ragowvk4csuufxr4qc5zw91a9ldidvve8t0ugdj1zwo8bn2fnjpbm0k0771739f1h8qh3zhnrf3ptbnjgb6zyz8jlblb7zkoqxtg38u1iul3iq4d6se5emd6hq5f86noq8xsff1bx58tvy22mlukvj7lcptxwqgrd32vzgjks8oscbs25tosvo6aczb9tk37433x425pc44b9ajravzbr8rscke2zcq7j3zlvedf0a6u9u6f26w7x7qb9syxo1u6y6wis5g2jgujlx8sr81cxn8cdrx4hcvjupqr8yv9d8al17b81p6ns6l730di4ptei3qn9zcb1bo5vkxwwzdstdhcmifn8z3pstbenga8qcs0zg9zyvac4pd9y1hfp5cyuv2v5jp80b4kpy92wrs5n37hw7qs3xr8q9mo4h5s99whj6kg2ym8ithyli7ta5og13e84be4wp013lmag69l94 == \z\m\j\7\j\b\1\a\n\j\v\m\6\1\p\o\3\3\c\f\o\n\j\u\0\y\p\0\3\9\d\v\j\a\b\t\j\y\o\s\7\1\r\a\g\o\w\v\k\4\c\s\u\u\f\x\r\4\q\c\5\z\w\9\1\a\9\l\d\i\d\v\v\e\8\t\0\u\g\d\j\1\z\w\o\8\b\n\2\f\n\j\p\b\m\0\k\0\7\7\1\7\3\9\f\1\h\8\q\h\3\z\h\n\r\f\3\p\t\b\n\j\g\b\6\z\y\z\8\j\l\b\l\b\7\z\k\o\q\x\t\g\3\8\u\1\i\u\l\3\i\q\4\d\6\s\e\5\e\m\d\6\h\q\5\f\8\6\n\o\q\8\x\s\f\f\1\b\x\5\8\t\v\y\2\2\m\l\u\k\v\j\7\l\c\p\t\x\w\q\g\r\d\3\2\v\z\g\j\k\s\8\o\s\c\b\s\2\5\t\o\s\v\o\6\a\c\z\b\9\t\k\3\7\4\3\3\x\4\2\5\p\c\4\4\b\9\a\j\r\a\v\z\b\r\8\r\s\c\k\e\2\z\c\q\7\j\3\z\l\v\e\d\f\0\a\6\u\9\u\6\f\2\6\w\7\x\7\q\b\9\s\y\x\o\1\u\6\y\6\w\i\s\5\g\2\j\g\u\j\l\x\8\s\r\8\1\c\x\n\8\c\d\r\x\4\h\c\v\j\u\p\q\r\8\y\v\9\d\8\a\l\1\7\b\8\1\p\6\n\s\6\l\7\3\0\d\i\4\p\t\e\i\3\q\n\9\z\c\b\1\b\o\5\v\k\x\w\w\z\d\s\t\d\h\c\m\i\f\n\8\z\3\p\s\t\b\e\n\g\a\8\q\c\s\0\z\g\9\z\y\v\a\c\4\p\d\9\y\1\h\f\p\5\c\y\u\v\2\v\5\j\p\8\0\b\4\k\p\y\9\2\w\r\s\5\n\3\7\h\w\7\q\s\3\x\r\8\q\9\m\o\4\h\5\s\9\9\w\h\j\6\k\g\2\y\m\8\i\t\h\y\l\i\7\t\a\5\o\g\1\3\e\8\4\b\e\4\w\p\0\1\3\l\m\a\g\6\9\l\9\4 ]] 00:15:17.889 00:15:17.889 real 0m4.844s 00:15:17.889 user 0m2.794s 00:15:17.889 sys 0m1.053s 00:15:17.889 09:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:17.889 09:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:17.889 09:08:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:15:17.889 09:08:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:17.889 09:08:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:17.889 ************************************ 00:15:17.889 END TEST spdk_dd_posix 00:15:17.889 ************************************ 00:15:17.889 00:15:17.889 real 0m22.146s 00:15:17.889 user 0m11.620s 00:15:17.889 sys 0m6.113s 00:15:17.889 09:08:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:17.889 09:08:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:17.889 09:08:30 spdk_dd -- dd/dd.sh@22 
-- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:15:17.889 09:08:30 spdk_dd -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:17.889 09:08:30 spdk_dd -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:17.889 09:08:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:17.889 ************************************ 00:15:17.889 START TEST spdk_dd_malloc 00:15:17.889 ************************************ 00:15:17.889 09:08:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:15:17.889 * Looking for test storage... 00:15:18.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:15:18.148 ************************************ 00:15:18.148 START TEST dd_malloc_copy 00:15:18.148 ************************************ 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # malloc_copy 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:18.148 09:08:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:18.148 [2024-05-15 09:08:30.402110] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:18.148 [2024-05-15 09:08:30.402373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62879 ] 00:15:18.148 { 00:15:18.148 "subsystems": [ 00:15:18.148 { 00:15:18.148 "subsystem": "bdev", 00:15:18.148 "config": [ 00:15:18.148 { 00:15:18.148 "params": { 00:15:18.148 "block_size": 512, 00:15:18.148 "num_blocks": 1048576, 00:15:18.148 "name": "malloc0" 00:15:18.148 }, 00:15:18.148 "method": "bdev_malloc_create" 00:15:18.148 }, 00:15:18.148 { 00:15:18.148 "params": { 00:15:18.148 "block_size": 512, 00:15:18.148 "num_blocks": 1048576, 00:15:18.148 "name": "malloc1" 00:15:18.148 }, 00:15:18.148 "method": "bdev_malloc_create" 00:15:18.148 }, 00:15:18.148 { 00:15:18.148 "method": "bdev_wait_for_examine" 00:15:18.148 } 00:15:18.148 ] 00:15:18.148 } 00:15:18.148 ] 00:15:18.148 } 00:15:18.148 [2024-05-15 09:08:30.537461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.407 [2024-05-15 09:08:30.641712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.847  Copying: 206/512 [MB] (206 MBps) Copying: 392/512 [MB] (186 MBps) Copying: 512/512 [MB] (average 199 MBps) 00:15:21.847 00:15:21.847 09:08:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:15:21.847 09:08:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:15:21.847 09:08:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:21.847 09:08:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:21.847 [2024-05-15 09:08:34.125928] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:21.847 [2024-05-15 09:08:34.127021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62932 ] 00:15:21.847 { 00:15:21.847 "subsystems": [ 00:15:21.847 { 00:15:21.847 "subsystem": "bdev", 00:15:21.847 "config": [ 00:15:21.847 { 00:15:21.847 "params": { 00:15:21.847 "block_size": 512, 00:15:21.847 "num_blocks": 1048576, 00:15:21.847 "name": "malloc0" 00:15:21.847 }, 00:15:21.847 "method": "bdev_malloc_create" 00:15:21.847 }, 00:15:21.847 { 00:15:21.847 "params": { 00:15:21.847 "block_size": 512, 00:15:21.847 "num_blocks": 1048576, 00:15:21.847 "name": "malloc1" 00:15:21.847 }, 00:15:21.847 "method": "bdev_malloc_create" 00:15:21.847 }, 00:15:21.847 { 00:15:21.847 "method": "bdev_wait_for_examine" 00:15:21.847 } 00:15:21.847 ] 00:15:21.847 } 00:15:21.847 ] 00:15:21.847 } 00:15:21.847 [2024-05-15 09:08:34.268495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.105 [2024-05-15 09:08:34.391498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.261  Copying: 228/512 [MB] (228 MBps) Copying: 457/512 [MB] (228 MBps) Copying: 512/512 [MB] (average 229 MBps) 00:15:25.261 00:15:25.261 00:15:25.261 real 0m7.169s 00:15:25.261 user 0m6.259s 00:15:25.261 sys 0m0.718s 00:15:25.261 09:08:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:25.261 ************************************ 00:15:25.261 END TEST dd_malloc_copy 00:15:25.261 ************************************ 00:15:25.261 09:08:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:25.261 ************************************ 00:15:25.261 END TEST spdk_dd_malloc 00:15:25.261 ************************************ 00:15:25.261 00:15:25.261 real 0m7.323s 00:15:25.261 user 0m6.325s 00:15:25.261 sys 0m0.808s 00:15:25.261 09:08:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:25.261 09:08:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:15:25.261 09:08:37 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:15:25.261 09:08:37 spdk_dd -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:15:25.261 09:08:37 spdk_dd -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:25.261 09:08:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:25.261 ************************************ 00:15:25.261 START TEST spdk_dd_bdev_to_bdev 00:15:25.261 ************************************ 00:15:25.261 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:15:25.519 * Looking for test storage... 
00:15:25.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:15:25.519 
09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:25.519 ************************************ 00:15:25.519 START TEST dd_inflate_file 00:15:25.519 ************************************ 00:15:25.519 09:08:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:15:25.519 [2024-05-15 09:08:37.811210] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:25.519 [2024-05-15 09:08:37.811596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63037 ] 00:15:25.519 [2024-05-15 09:08:37.959458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.777 [2024-05-15 09:08:38.081291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.034  Copying: 64/64 [MB] (average 1333 MBps) 00:15:26.034 00:15:26.034 00:15:26.034 real 0m0.664s 00:15:26.034 user 0m0.419s 00:15:26.034 sys 0m0.295s 00:15:26.034 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:26.034 ************************************ 00:15:26.034 END TEST dd_inflate_file 00:15:26.034 ************************************ 00:15:26.034 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:15:26.034 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:26.292 ************************************ 00:15:26.292 START TEST dd_copy_to_out_bdev 00:15:26.292 ************************************ 00:15:26.292 09:08:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:15:26.292 { 00:15:26.292 "subsystems": [ 00:15:26.292 { 00:15:26.292 "subsystem": "bdev", 00:15:26.292 "config": [ 00:15:26.292 { 00:15:26.292 "params": { 00:15:26.292 "trtype": "pcie", 00:15:26.292 "traddr": "0000:00:10.0", 00:15:26.292 "name": "Nvme0" 00:15:26.292 }, 00:15:26.292 "method": "bdev_nvme_attach_controller" 00:15:26.292 }, 00:15:26.292 { 00:15:26.292 "params": { 00:15:26.292 "trtype": "pcie", 00:15:26.292 "traddr": "0000:00:11.0", 00:15:26.292 "name": "Nvme1" 00:15:26.292 }, 00:15:26.292 "method": "bdev_nvme_attach_controller" 00:15:26.292 }, 00:15:26.292 { 00:15:26.292 "method": "bdev_wait_for_examine" 00:15:26.292 } 00:15:26.292 ] 00:15:26.292 } 00:15:26.292 ] 00:15:26.292 } 00:15:26.292 [2024-05-15 09:08:38.548029] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:26.292 [2024-05-15 09:08:38.548347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63070 ] 00:15:26.292 [2024-05-15 09:08:38.689849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.550 [2024-05-15 09:08:38.822992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.798  Copying: 64/64 [MB] (average 71 MBps) 00:15:27.798 00:15:27.798 00:15:27.798 real 0m1.689s 00:15:27.798 user 0m1.453s 00:15:27.798 sys 0m1.227s 00:15:27.798 ************************************ 00:15:27.798 END TEST dd_copy_to_out_bdev 00:15:27.798 ************************************ 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:27.798 ************************************ 00:15:27.798 START TEST dd_offset_magic 00:15:27.798 ************************************ 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # offset_magic 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:15:27.798 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:28.056 09:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:28.056 [2024-05-15 09:08:40.280403] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:28.056 [2024-05-15 09:08:40.280640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63110 ] 00:15:28.056 { 00:15:28.056 "subsystems": [ 00:15:28.056 { 00:15:28.056 "subsystem": "bdev", 00:15:28.056 "config": [ 00:15:28.056 { 00:15:28.056 "params": { 00:15:28.056 "trtype": "pcie", 00:15:28.056 "traddr": "0000:00:10.0", 00:15:28.056 "name": "Nvme0" 00:15:28.056 }, 00:15:28.056 "method": "bdev_nvme_attach_controller" 00:15:28.056 }, 00:15:28.056 { 00:15:28.056 "params": { 00:15:28.056 "trtype": "pcie", 00:15:28.056 "traddr": "0000:00:11.0", 00:15:28.056 "name": "Nvme1" 00:15:28.056 }, 00:15:28.056 "method": "bdev_nvme_attach_controller" 00:15:28.056 }, 00:15:28.056 { 00:15:28.056 "method": "bdev_wait_for_examine" 00:15:28.056 } 00:15:28.056 ] 00:15:28.056 } 00:15:28.056 ] 00:15:28.056 } 00:15:28.056 [2024-05-15 09:08:40.417145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.315 [2024-05-15 09:08:40.542810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.831  Copying: 65/65 [MB] (average 792 MBps) 00:15:28.831 00:15:28.831 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:15:28.831 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:15:28.831 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:28.831 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:28.831 [2024-05-15 09:08:41.147239] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:28.831 [2024-05-15 09:08:41.147562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63130 ] 00:15:28.831 { 00:15:28.831 "subsystems": [ 00:15:28.831 { 00:15:28.831 "subsystem": "bdev", 00:15:28.831 "config": [ 00:15:28.832 { 00:15:28.832 "params": { 00:15:28.832 "trtype": "pcie", 00:15:28.832 "traddr": "0000:00:10.0", 00:15:28.832 "name": "Nvme0" 00:15:28.832 }, 00:15:28.832 "method": "bdev_nvme_attach_controller" 00:15:28.832 }, 00:15:28.832 { 00:15:28.832 "params": { 00:15:28.832 "trtype": "pcie", 00:15:28.832 "traddr": "0000:00:11.0", 00:15:28.832 "name": "Nvme1" 00:15:28.832 }, 00:15:28.832 "method": "bdev_nvme_attach_controller" 00:15:28.832 }, 00:15:28.832 { 00:15:28.832 "method": "bdev_wait_for_examine" 00:15:28.832 } 00:15:28.832 ] 00:15:28.832 } 00:15:28.832 ] 00:15:28.832 } 00:15:29.091 [2024-05-15 09:08:41.286350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.091 [2024-05-15 09:08:41.388740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.636  Copying: 1024/1024 [kB] (average 500 MBps) 00:15:29.636 00:15:29.636 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:15:29.636 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:15:29.636 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:15:29.636 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:15:29.636 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:15:29.636 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:29.636 09:08:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:29.636 [2024-05-15 09:08:41.883022] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:29.636 [2024-05-15 09:08:41.883364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63152 ] 00:15:29.636 { 00:15:29.636 "subsystems": [ 00:15:29.636 { 00:15:29.636 "subsystem": "bdev", 00:15:29.636 "config": [ 00:15:29.636 { 00:15:29.636 "params": { 00:15:29.636 "trtype": "pcie", 00:15:29.636 "traddr": "0000:00:10.0", 00:15:29.636 "name": "Nvme0" 00:15:29.636 }, 00:15:29.636 "method": "bdev_nvme_attach_controller" 00:15:29.636 }, 00:15:29.636 { 00:15:29.636 "params": { 00:15:29.636 "trtype": "pcie", 00:15:29.636 "traddr": "0000:00:11.0", 00:15:29.636 "name": "Nvme1" 00:15:29.636 }, 00:15:29.636 "method": "bdev_nvme_attach_controller" 00:15:29.636 }, 00:15:29.636 { 00:15:29.636 "method": "bdev_wait_for_examine" 00:15:29.636 } 00:15:29.636 ] 00:15:29.636 } 00:15:29.636 ] 00:15:29.636 } 00:15:29.636 [2024-05-15 09:08:42.030280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.894 [2024-05-15 09:08:42.148056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.411  Copying: 65/65 [MB] (average 1048 MBps) 00:15:30.411 00:15:30.411 09:08:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:15:30.411 09:08:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:15:30.411 09:08:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:30.411 09:08:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:30.411 [2024-05-15 09:08:42.715369] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:30.411 [2024-05-15 09:08:42.715701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63166 ] 00:15:30.411 { 00:15:30.411 "subsystems": [ 00:15:30.411 { 00:15:30.411 "subsystem": "bdev", 00:15:30.411 "config": [ 00:15:30.411 { 00:15:30.411 "params": { 00:15:30.411 "trtype": "pcie", 00:15:30.411 "traddr": "0000:00:10.0", 00:15:30.411 "name": "Nvme0" 00:15:30.411 }, 00:15:30.411 "method": "bdev_nvme_attach_controller" 00:15:30.411 }, 00:15:30.411 { 00:15:30.411 "params": { 00:15:30.411 "trtype": "pcie", 00:15:30.411 "traddr": "0000:00:11.0", 00:15:30.411 "name": "Nvme1" 00:15:30.411 }, 00:15:30.411 "method": "bdev_nvme_attach_controller" 00:15:30.411 }, 00:15:30.411 { 00:15:30.411 "method": "bdev_wait_for_examine" 00:15:30.411 } 00:15:30.411 ] 00:15:30.411 } 00:15:30.411 ] 00:15:30.411 } 00:15:30.411 [2024-05-15 09:08:42.850399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.670 [2024-05-15 09:08:42.950546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.927  Copying: 1024/1024 [kB] (average 500 MBps) 00:15:30.927 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:15:31.187 00:15:31.187 real 0m3.135s 00:15:31.187 user 0m2.292s 00:15:31.187 sys 0m0.863s 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:31.187 ************************************ 00:15:31.187 END TEST dd_offset_magic 00:15:31.187 ************************************ 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:31.187 09:08:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:31.187 [2024-05-15 09:08:43.473072] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:31.187 [2024-05-15 09:08:43.473400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63198 ] 00:15:31.187 { 00:15:31.187 "subsystems": [ 00:15:31.187 { 00:15:31.187 "subsystem": "bdev", 00:15:31.187 "config": [ 00:15:31.187 { 00:15:31.187 "params": { 00:15:31.187 "trtype": "pcie", 00:15:31.187 "traddr": "0000:00:10.0", 00:15:31.187 "name": "Nvme0" 00:15:31.187 }, 00:15:31.187 "method": "bdev_nvme_attach_controller" 00:15:31.187 }, 00:15:31.187 { 00:15:31.187 "params": { 00:15:31.187 "trtype": "pcie", 00:15:31.187 "traddr": "0000:00:11.0", 00:15:31.187 "name": "Nvme1" 00:15:31.187 }, 00:15:31.187 "method": "bdev_nvme_attach_controller" 00:15:31.187 }, 00:15:31.187 { 00:15:31.187 "method": "bdev_wait_for_examine" 00:15:31.187 } 00:15:31.187 ] 00:15:31.187 } 00:15:31.187 ] 00:15:31.187 } 00:15:31.187 [2024-05-15 09:08:43.617826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.445 [2024-05-15 09:08:43.719855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.703  Copying: 5120/5120 [kB] (average 1000 MBps) 00:15:31.703 00:15:31.703 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:15:31.703 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:15:31.703 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:15:31.703 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:15:31.703 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:15:31.703 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:15:31.703 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:15:31.999 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:15:31.999 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:31.999 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:31.999 [2024-05-15 09:08:44.187863] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:31.999 [2024-05-15 09:08:44.188143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63219 ] 00:15:31.999 { 00:15:31.999 "subsystems": [ 00:15:31.999 { 00:15:31.999 "subsystem": "bdev", 00:15:31.999 "config": [ 00:15:31.999 { 00:15:31.999 "params": { 00:15:31.999 "trtype": "pcie", 00:15:31.999 "traddr": "0000:00:10.0", 00:15:31.999 "name": "Nvme0" 00:15:31.999 }, 00:15:31.999 "method": "bdev_nvme_attach_controller" 00:15:31.999 }, 00:15:31.999 { 00:15:31.999 "params": { 00:15:31.999 "trtype": "pcie", 00:15:31.999 "traddr": "0000:00:11.0", 00:15:31.999 "name": "Nvme1" 00:15:31.999 }, 00:15:31.999 "method": "bdev_nvme_attach_controller" 00:15:31.999 }, 00:15:31.999 { 00:15:31.999 "method": "bdev_wait_for_examine" 00:15:31.999 } 00:15:31.999 ] 00:15:31.999 } 00:15:31.999 ] 00:15:31.999 } 00:15:31.999 [2024-05-15 09:08:44.324905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.999 [2024-05-15 09:08:44.428839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.515  Copying: 5120/5120 [kB] (average 833 MBps) 00:15:32.515 00:15:32.515 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:15:32.515 ************************************ 00:15:32.515 END TEST spdk_dd_bdev_to_bdev 00:15:32.515 ************************************ 00:15:32.515 00:15:32.515 real 0m7.235s 00:15:32.515 user 0m5.304s 00:15:32.515 sys 0m3.064s 00:15:32.515 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:32.515 09:08:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:32.515 09:08:44 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:15:32.515 09:08:44 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:15:32.515 09:08:44 spdk_dd -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:32.515 09:08:44 spdk_dd -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:32.515 09:08:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:32.515 ************************************ 00:15:32.515 START TEST spdk_dd_uring 00:15:32.515 ************************************ 00:15:32.515 09:08:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:15:32.774 * Looking for test storage... 
00:15:32.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:32.774 09:08:45 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.774 09:08:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.774 09:08:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.774 09:08:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.774 09:08:45 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.774 09:08:45 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:15:32.775 ************************************ 00:15:32.775 START TEST dd_uring_copy 00:15:32.775 ************************************ 00:15:32.775 
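The dd_uring_copy run that starts here stages everything on a compressed RAM disk. As a rough stand-alone sketch of that setup, not the harness code itself (the device index, 512M size and bdev parameters mirror this run; the file name uring.json is illustrative and the zram module is assumed to be loaded):

# Allocate a zram device and give it a 512M backing store, as init_zram/create_zram_dev/set_zram_dev do below.
dev_id=$(cat /sys/class/zram-control/hot_add)      # reading hot_add creates /dev/zramN and prints N
echo 512M > "/sys/block/zram${dev_id}/disksize"

# Describe a uring bdev on top of it plus a 512 MiB malloc bdev, matching the gen_conf output printed further down.
cat > uring.json <<CONF
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create",
    "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_uring_create",
    "params": { "filename": "/dev/zram${dev_id}", "name": "uring0" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
CONF

With a config like that in hand, the copies traced below are ordinary spdk_dd invocations against uring0.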
09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1122 -- # uring_zram_copy 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=ct5539qr4yuvtybge0n7wyqc3rcgej6s71nvdi0usync4yqw42zc5up1497jyxgrnhrshpa5ua7dcmg1zyww5vji43kyth53303055w2w83k6y3ovu580yc3m7odsdsd9ai4p3shlt6oq736xrr7uklf18he78z432n43zz416toxbjjtwtz1mf3c4rszv7qqz1alhd8yoc3nutwq9mq5h7ltobqjyfb75ruo1b7d4pouno389muotugv686t3rlhcflg81sl9x5e476ylicyagn2h1dnz5i8xhkn9jo6vnghujs42ui7kkn88ymjx01qnol6hglmzzw8mv35nbiry2a7v3m85l4cm6iddnkod80qn9hkoxvu1qio67ofg6shnx0cgin1p7gafoitbyez2tu99fjnfqpfpqyrf9olsyj3djdv90v808clhh5ujig0zh2e8ejwh3g16jv3j8naxi8o12fpcgcolyshhno0wcpuffsyzqv9aep1tie7cw4rdj1m0yf85sge9yujw2zsa1t9um3l3pdf2u5jyxfzb6wscbvctpa53nybvzdooimo9pfib34mp5lj662aa7ents3573af7z9x7itp3ms05tlxfrcnni45el5kjuved8qczf310kpa7wzgfx1srf4lzjbwxmho4yirq7dthhxc3wp05ps3bqiyewtzpvyswwfchiadfb69eilsxny0wm8yqnxt2vj687v2ofn6lpd8h7d7al7fwhciyh0zv0ohwmmant0v1rov468n4xoqmug5i6qsby43j8hgwhs63spjdchweagdsv4k0li3o1d3qwbekx18jfy6o3r05ymkj9ytyndvup3uv7o67k6w7sknu0u361yiismix5fu1kf5wuktfa8whf6bokaj6dj3pt1ifriqtqqdmmqmlkz56m7549yttwbgbv8w3hj0xn894tgwbudgwoe64qldzpc1lmkz4dnzz3wvttozkf9mx8o2246hbgnsp3377q0rrwi1qo2 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo ct5539qr4yuvtybge0n7wyqc3rcgej6s71nvdi0usync4yqw42zc5up1497jyxgrnhrshpa5ua7dcmg1zyww5vji43kyth53303055w2w83k6y3ovu580yc3m7odsdsd9ai4p3shlt6oq736xrr7uklf18he78z432n43zz416toxbjjtwtz1mf3c4rszv7qqz1alhd8yoc3nutwq9mq5h7ltobqjyfb75ruo1b7d4pouno389muotugv686t3rlhcflg81sl9x5e476ylicyagn2h1dnz5i8xhkn9jo6vnghujs42ui7kkn88ymjx01qnol6hglmzzw8mv35nbiry2a7v3m85l4cm6iddnkod80qn9hkoxvu1qio67ofg6shnx0cgin1p7gafoitbyez2tu99fjnfqpfpqyrf9olsyj3djdv90v808clhh5ujig0zh2e8ejwh3g16jv3j8naxi8o12fpcgcolyshhno0wcpuffsyzqv9aep1tie7cw4rdj1m0yf85sge9yujw2zsa1t9um3l3pdf2u5jyxfzb6wscbvctpa53nybvzdooimo9pfib34mp5lj662aa7ents3573af7z9x7itp3ms05tlxfrcnni45el5kjuved8qczf310kpa7wzgfx1srf4lzjbwxmho4yirq7dthhxc3wp05ps3bqiyewtzpvyswwfchiadfb69eilsxny0wm8yqnxt2vj687v2ofn6lpd8h7d7al7fwhciyh0zv0ohwmmant0v1rov468n4xoqmug5i6qsby43j8hgwhs63spjdchweagdsv4k0li3o1d3qwbekx18jfy6o3r05ymkj9ytyndvup3uv7o67k6w7sknu0u361yiismix5fu1kf5wuktfa8whf6bokaj6dj3pt1ifriqtqqdmmqmlkz56m7549yttwbgbv8w3hj0xn894tgwbudgwoe64qldzpc1lmkz4dnzz3wvttozkf9mx8o2246hbgnsp3377q0rrwi1qo2 00:15:32.775 09:08:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:15:32.775 [2024-05-15 09:08:45.119008] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:32.775 [2024-05-15 09:08:45.119848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63289 ] 00:15:33.033 [2024-05-15 09:08:45.267985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.033 [2024-05-15 09:08:45.384164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.226  Copying: 511/511 [MB] (average 1094 MBps) 00:15:34.226 00:15:34.226 09:08:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:15:34.226 09:08:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:15:34.226 09:08:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:34.226 09:08:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 { 00:15:34.226 "subsystems": [ 00:15:34.226 { 00:15:34.226 "subsystem": "bdev", 00:15:34.226 "config": [ 00:15:34.226 { 00:15:34.226 "params": { 00:15:34.226 "block_size": 512, 00:15:34.226 "num_blocks": 1048576, 00:15:34.226 "name": "malloc0" 00:15:34.226 }, 00:15:34.226 "method": "bdev_malloc_create" 00:15:34.226 }, 00:15:34.226 { 00:15:34.226 "params": { 00:15:34.226 "filename": "/dev/zram1", 00:15:34.226 "name": "uring0" 00:15:34.226 }, 00:15:34.226 "method": "bdev_uring_create" 00:15:34.226 }, 00:15:34.226 { 00:15:34.226 "method": "bdev_wait_for_examine" 00:15:34.226 } 00:15:34.226 ] 00:15:34.226 } 00:15:34.226 ] 00:15:34.226 } 00:15:34.226 [2024-05-15 09:08:46.529280] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:34.226 [2024-05-15 09:08:46.529516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63305 ] 00:15:34.226 [2024-05-15 09:08:46.665155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.485 [2024-05-15 09:08:46.766950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.064  Copying: 238/512 [MB] (238 MBps) Copying: 512/512 [MB] (average 262 MBps) 00:15:37.064 00:15:37.064 09:08:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:15:37.064 09:08:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:15:37.064 09:08:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:37.064 09:08:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 { 00:15:37.064 "subsystems": [ 00:15:37.064 { 00:15:37.064 "subsystem": "bdev", 00:15:37.064 "config": [ 00:15:37.064 { 00:15:37.064 "params": { 00:15:37.064 "block_size": 512, 00:15:37.064 "num_blocks": 1048576, 00:15:37.064 "name": "malloc0" 00:15:37.064 }, 00:15:37.064 "method": "bdev_malloc_create" 00:15:37.064 }, 00:15:37.064 { 00:15:37.064 "params": { 00:15:37.064 "filename": "/dev/zram1", 00:15:37.064 "name": "uring0" 00:15:37.064 }, 00:15:37.064 "method": "bdev_uring_create" 00:15:37.064 }, 00:15:37.064 { 00:15:37.064 "method": "bdev_wait_for_examine" 00:15:37.064 } 00:15:37.064 ] 00:15:37.064 } 00:15:37.064 ] 00:15:37.064 } 00:15:37.064 [2024-05-15 09:08:49.362332] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
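Both spdk_dd invocations above take their bdev layout from --json /dev/fd/62. That path is almost certainly bash process substitution: the harness's gen_conf helper prints the JSON reproduced in the braces, and spdk_dd reads it as a pseudo-file instead of a temporary config on disk. A hedged stand-alone equivalent, where gen_conf is a local stand-in rather than the harness function and /dev/zram1 is the device from this run:

gen_conf() {                                   # stand-in: emit a minimal bdev config as JSON
  cat <<'CONF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_uring_create", "params": { "filename": "/dev/zram1", "name": "uring0" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
CONF
}
spdk_dd --ib=uring0 --of=magic.dump1 --json <(gen_conf)   # the <(...) surfaces as /dev/fd/NN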
00:15:37.064 [2024-05-15 09:08:49.362619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63349 ] 00:15:37.064 [2024-05-15 09:08:49.504159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.323 [2024-05-15 09:08:49.641133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.215  Copying: 236/512 [MB] (236 MBps) Copying: 478/512 [MB] (242 MBps) Copying: 512/512 [MB] (average 239 MBps) 00:15:40.215 00:15:40.215 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:15:40.215 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ ct5539qr4yuvtybge0n7wyqc3rcgej6s71nvdi0usync4yqw42zc5up1497jyxgrnhrshpa5ua7dcmg1zyww5vji43kyth53303055w2w83k6y3ovu580yc3m7odsdsd9ai4p3shlt6oq736xrr7uklf18he78z432n43zz416toxbjjtwtz1mf3c4rszv7qqz1alhd8yoc3nutwq9mq5h7ltobqjyfb75ruo1b7d4pouno389muotugv686t3rlhcflg81sl9x5e476ylicyagn2h1dnz5i8xhkn9jo6vnghujs42ui7kkn88ymjx01qnol6hglmzzw8mv35nbiry2a7v3m85l4cm6iddnkod80qn9hkoxvu1qio67ofg6shnx0cgin1p7gafoitbyez2tu99fjnfqpfpqyrf9olsyj3djdv90v808clhh5ujig0zh2e8ejwh3g16jv3j8naxi8o12fpcgcolyshhno0wcpuffsyzqv9aep1tie7cw4rdj1m0yf85sge9yujw2zsa1t9um3l3pdf2u5jyxfzb6wscbvctpa53nybvzdooimo9pfib34mp5lj662aa7ents3573af7z9x7itp3ms05tlxfrcnni45el5kjuved8qczf310kpa7wzgfx1srf4lzjbwxmho4yirq7dthhxc3wp05ps3bqiyewtzpvyswwfchiadfb69eilsxny0wm8yqnxt2vj687v2ofn6lpd8h7d7al7fwhciyh0zv0ohwmmant0v1rov468n4xoqmug5i6qsby43j8hgwhs63spjdchweagdsv4k0li3o1d3qwbekx18jfy6o3r05ymkj9ytyndvup3uv7o67k6w7sknu0u361yiismix5fu1kf5wuktfa8whf6bokaj6dj3pt1ifriqtqqdmmqmlkz56m7549yttwbgbv8w3hj0xn894tgwbudgwoe64qldzpc1lmkz4dnzz3wvttozkf9mx8o2246hbgnsp3377q0rrwi1qo2 == 
\c\t\5\5\3\9\q\r\4\y\u\v\t\y\b\g\e\0\n\7\w\y\q\c\3\r\c\g\e\j\6\s\7\1\n\v\d\i\0\u\s\y\n\c\4\y\q\w\4\2\z\c\5\u\p\1\4\9\7\j\y\x\g\r\n\h\r\s\h\p\a\5\u\a\7\d\c\m\g\1\z\y\w\w\5\v\j\i\4\3\k\y\t\h\5\3\3\0\3\0\5\5\w\2\w\8\3\k\6\y\3\o\v\u\5\8\0\y\c\3\m\7\o\d\s\d\s\d\9\a\i\4\p\3\s\h\l\t\6\o\q\7\3\6\x\r\r\7\u\k\l\f\1\8\h\e\7\8\z\4\3\2\n\4\3\z\z\4\1\6\t\o\x\b\j\j\t\w\t\z\1\m\f\3\c\4\r\s\z\v\7\q\q\z\1\a\l\h\d\8\y\o\c\3\n\u\t\w\q\9\m\q\5\h\7\l\t\o\b\q\j\y\f\b\7\5\r\u\o\1\b\7\d\4\p\o\u\n\o\3\8\9\m\u\o\t\u\g\v\6\8\6\t\3\r\l\h\c\f\l\g\8\1\s\l\9\x\5\e\4\7\6\y\l\i\c\y\a\g\n\2\h\1\d\n\z\5\i\8\x\h\k\n\9\j\o\6\v\n\g\h\u\j\s\4\2\u\i\7\k\k\n\8\8\y\m\j\x\0\1\q\n\o\l\6\h\g\l\m\z\z\w\8\m\v\3\5\n\b\i\r\y\2\a\7\v\3\m\8\5\l\4\c\m\6\i\d\d\n\k\o\d\8\0\q\n\9\h\k\o\x\v\u\1\q\i\o\6\7\o\f\g\6\s\h\n\x\0\c\g\i\n\1\p\7\g\a\f\o\i\t\b\y\e\z\2\t\u\9\9\f\j\n\f\q\p\f\p\q\y\r\f\9\o\l\s\y\j\3\d\j\d\v\9\0\v\8\0\8\c\l\h\h\5\u\j\i\g\0\z\h\2\e\8\e\j\w\h\3\g\1\6\j\v\3\j\8\n\a\x\i\8\o\1\2\f\p\c\g\c\o\l\y\s\h\h\n\o\0\w\c\p\u\f\f\s\y\z\q\v\9\a\e\p\1\t\i\e\7\c\w\4\r\d\j\1\m\0\y\f\8\5\s\g\e\9\y\u\j\w\2\z\s\a\1\t\9\u\m\3\l\3\p\d\f\2\u\5\j\y\x\f\z\b\6\w\s\c\b\v\c\t\p\a\5\3\n\y\b\v\z\d\o\o\i\m\o\9\p\f\i\b\3\4\m\p\5\l\j\6\6\2\a\a\7\e\n\t\s\3\5\7\3\a\f\7\z\9\x\7\i\t\p\3\m\s\0\5\t\l\x\f\r\c\n\n\i\4\5\e\l\5\k\j\u\v\e\d\8\q\c\z\f\3\1\0\k\p\a\7\w\z\g\f\x\1\s\r\f\4\l\z\j\b\w\x\m\h\o\4\y\i\r\q\7\d\t\h\h\x\c\3\w\p\0\5\p\s\3\b\q\i\y\e\w\t\z\p\v\y\s\w\w\f\c\h\i\a\d\f\b\6\9\e\i\l\s\x\n\y\0\w\m\8\y\q\n\x\t\2\v\j\6\8\7\v\2\o\f\n\6\l\p\d\8\h\7\d\7\a\l\7\f\w\h\c\i\y\h\0\z\v\0\o\h\w\m\m\a\n\t\0\v\1\r\o\v\4\6\8\n\4\x\o\q\m\u\g\5\i\6\q\s\b\y\4\3\j\8\h\g\w\h\s\6\3\s\p\j\d\c\h\w\e\a\g\d\s\v\4\k\0\l\i\3\o\1\d\3\q\w\b\e\k\x\1\8\j\f\y\6\o\3\r\0\5\y\m\k\j\9\y\t\y\n\d\v\u\p\3\u\v\7\o\6\7\k\6\w\7\s\k\n\u\0\u\3\6\1\y\i\i\s\m\i\x\5\f\u\1\k\f\5\w\u\k\t\f\a\8\w\h\f\6\b\o\k\a\j\6\d\j\3\p\t\1\i\f\r\i\q\t\q\q\d\m\m\q\m\l\k\z\5\6\m\7\5\4\9\y\t\t\w\b\g\b\v\8\w\3\h\j\0\x\n\8\9\4\t\g\w\b\u\d\g\w\o\e\6\4\q\l\d\z\p\c\1\l\m\k\z\4\d\n\z\z\3\w\v\t\t\o\z\k\f\9\m\x\8\o\2\2\4\6\h\b\g\n\s\p\3\3\7\7\q\0\r\r\w\i\1\q\o\2 ]] 00:15:40.215 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:15:40.215 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ ct5539qr4yuvtybge0n7wyqc3rcgej6s71nvdi0usync4yqw42zc5up1497jyxgrnhrshpa5ua7dcmg1zyww5vji43kyth53303055w2w83k6y3ovu580yc3m7odsdsd9ai4p3shlt6oq736xrr7uklf18he78z432n43zz416toxbjjtwtz1mf3c4rszv7qqz1alhd8yoc3nutwq9mq5h7ltobqjyfb75ruo1b7d4pouno389muotugv686t3rlhcflg81sl9x5e476ylicyagn2h1dnz5i8xhkn9jo6vnghujs42ui7kkn88ymjx01qnol6hglmzzw8mv35nbiry2a7v3m85l4cm6iddnkod80qn9hkoxvu1qio67ofg6shnx0cgin1p7gafoitbyez2tu99fjnfqpfpqyrf9olsyj3djdv90v808clhh5ujig0zh2e8ejwh3g16jv3j8naxi8o12fpcgcolyshhno0wcpuffsyzqv9aep1tie7cw4rdj1m0yf85sge9yujw2zsa1t9um3l3pdf2u5jyxfzb6wscbvctpa53nybvzdooimo9pfib34mp5lj662aa7ents3573af7z9x7itp3ms05tlxfrcnni45el5kjuved8qczf310kpa7wzgfx1srf4lzjbwxmho4yirq7dthhxc3wp05ps3bqiyewtzpvyswwfchiadfb69eilsxny0wm8yqnxt2vj687v2ofn6lpd8h7d7al7fwhciyh0zv0ohwmmant0v1rov468n4xoqmug5i6qsby43j8hgwhs63spjdchweagdsv4k0li3o1d3qwbekx18jfy6o3r05ymkj9ytyndvup3uv7o67k6w7sknu0u361yiismix5fu1kf5wuktfa8whf6bokaj6dj3pt1ifriqtqqdmmqmlkz56m7549yttwbgbv8w3hj0xn894tgwbudgwoe64qldzpc1lmkz4dnzz3wvttozkf9mx8o2246hbgnsp3377q0rrwi1qo2 == 
\c\t\5\5\3\9\q\r\4\y\u\v\t\y\b\g\e\0\n\7\w\y\q\c\3\r\c\g\e\j\6\s\7\1\n\v\d\i\0\u\s\y\n\c\4\y\q\w\4\2\z\c\5\u\p\1\4\9\7\j\y\x\g\r\n\h\r\s\h\p\a\5\u\a\7\d\c\m\g\1\z\y\w\w\5\v\j\i\4\3\k\y\t\h\5\3\3\0\3\0\5\5\w\2\w\8\3\k\6\y\3\o\v\u\5\8\0\y\c\3\m\7\o\d\s\d\s\d\9\a\i\4\p\3\s\h\l\t\6\o\q\7\3\6\x\r\r\7\u\k\l\f\1\8\h\e\7\8\z\4\3\2\n\4\3\z\z\4\1\6\t\o\x\b\j\j\t\w\t\z\1\m\f\3\c\4\r\s\z\v\7\q\q\z\1\a\l\h\d\8\y\o\c\3\n\u\t\w\q\9\m\q\5\h\7\l\t\o\b\q\j\y\f\b\7\5\r\u\o\1\b\7\d\4\p\o\u\n\o\3\8\9\m\u\o\t\u\g\v\6\8\6\t\3\r\l\h\c\f\l\g\8\1\s\l\9\x\5\e\4\7\6\y\l\i\c\y\a\g\n\2\h\1\d\n\z\5\i\8\x\h\k\n\9\j\o\6\v\n\g\h\u\j\s\4\2\u\i\7\k\k\n\8\8\y\m\j\x\0\1\q\n\o\l\6\h\g\l\m\z\z\w\8\m\v\3\5\n\b\i\r\y\2\a\7\v\3\m\8\5\l\4\c\m\6\i\d\d\n\k\o\d\8\0\q\n\9\h\k\o\x\v\u\1\q\i\o\6\7\o\f\g\6\s\h\n\x\0\c\g\i\n\1\p\7\g\a\f\o\i\t\b\y\e\z\2\t\u\9\9\f\j\n\f\q\p\f\p\q\y\r\f\9\o\l\s\y\j\3\d\j\d\v\9\0\v\8\0\8\c\l\h\h\5\u\j\i\g\0\z\h\2\e\8\e\j\w\h\3\g\1\6\j\v\3\j\8\n\a\x\i\8\o\1\2\f\p\c\g\c\o\l\y\s\h\h\n\o\0\w\c\p\u\f\f\s\y\z\q\v\9\a\e\p\1\t\i\e\7\c\w\4\r\d\j\1\m\0\y\f\8\5\s\g\e\9\y\u\j\w\2\z\s\a\1\t\9\u\m\3\l\3\p\d\f\2\u\5\j\y\x\f\z\b\6\w\s\c\b\v\c\t\p\a\5\3\n\y\b\v\z\d\o\o\i\m\o\9\p\f\i\b\3\4\m\p\5\l\j\6\6\2\a\a\7\e\n\t\s\3\5\7\3\a\f\7\z\9\x\7\i\t\p\3\m\s\0\5\t\l\x\f\r\c\n\n\i\4\5\e\l\5\k\j\u\v\e\d\8\q\c\z\f\3\1\0\k\p\a\7\w\z\g\f\x\1\s\r\f\4\l\z\j\b\w\x\m\h\o\4\y\i\r\q\7\d\t\h\h\x\c\3\w\p\0\5\p\s\3\b\q\i\y\e\w\t\z\p\v\y\s\w\w\f\c\h\i\a\d\f\b\6\9\e\i\l\s\x\n\y\0\w\m\8\y\q\n\x\t\2\v\j\6\8\7\v\2\o\f\n\6\l\p\d\8\h\7\d\7\a\l\7\f\w\h\c\i\y\h\0\z\v\0\o\h\w\m\m\a\n\t\0\v\1\r\o\v\4\6\8\n\4\x\o\q\m\u\g\5\i\6\q\s\b\y\4\3\j\8\h\g\w\h\s\6\3\s\p\j\d\c\h\w\e\a\g\d\s\v\4\k\0\l\i\3\o\1\d\3\q\w\b\e\k\x\1\8\j\f\y\6\o\3\r\0\5\y\m\k\j\9\y\t\y\n\d\v\u\p\3\u\v\7\o\6\7\k\6\w\7\s\k\n\u\0\u\3\6\1\y\i\i\s\m\i\x\5\f\u\1\k\f\5\w\u\k\t\f\a\8\w\h\f\6\b\o\k\a\j\6\d\j\3\p\t\1\i\f\r\i\q\t\q\q\d\m\m\q\m\l\k\z\5\6\m\7\5\4\9\y\t\t\w\b\g\b\v\8\w\3\h\j\0\x\n\8\9\4\t\g\w\b\u\d\g\w\o\e\6\4\q\l\d\z\p\c\1\l\m\k\z\4\d\n\z\z\3\w\v\t\t\o\z\k\f\9\m\x\8\o\2\2\4\6\h\b\g\n\s\p\3\3\7\7\q\0\r\r\w\i\1\q\o\2 ]] 00:15:40.216 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:15:40.477 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:15:40.477 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:15:40.477 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:40.477 09:08:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:40.477 [2024-05-15 09:08:52.887132] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
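The kilobyte-long token compared in the two [[ ... ]] tests above is the test's integrity check rather than noise: dd_uring_copy plants 1024 random characters at the start of magic.dump0, pushes the file through uring0 into magic.dump1, re-reads the first kilobyte with read -rn1024 and matches it against the original before diffing the whole files. The backslashes in the trace only appear because the right-hand side of == inside [[ ]] is treated as a glob pattern unless escaped or quoted. A condensed sketch of the same check, with illustrative file names:

magic=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 1024)    # 1 KiB verification token
printf '%s' "$magic" > magic.dump0                        # token sits at offset 0 of the source file
# ... magic.dump0 -> uring0 -> magic.dump1 via spdk_dd, as traced above ...
read -rn1024 verify_magic < magic.dump1                   # first KiB of the read-back copy
[[ $verify_magic == "$magic" ]] && diff -q magic.dump0 magic.dump1 && echo 'copy verified'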
00:15:40.477 [2024-05-15 09:08:52.887459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63398 ] 00:15:40.477 { 00:15:40.477 "subsystems": [ 00:15:40.477 { 00:15:40.477 "subsystem": "bdev", 00:15:40.477 "config": [ 00:15:40.477 { 00:15:40.477 "params": { 00:15:40.477 "block_size": 512, 00:15:40.477 "num_blocks": 1048576, 00:15:40.477 "name": "malloc0" 00:15:40.477 }, 00:15:40.477 "method": "bdev_malloc_create" 00:15:40.477 }, 00:15:40.477 { 00:15:40.477 "params": { 00:15:40.477 "filename": "/dev/zram1", 00:15:40.477 "name": "uring0" 00:15:40.477 }, 00:15:40.477 "method": "bdev_uring_create" 00:15:40.477 }, 00:15:40.477 { 00:15:40.477 "method": "bdev_wait_for_examine" 00:15:40.477 } 00:15:40.477 ] 00:15:40.477 } 00:15:40.477 ] 00:15:40.477 } 00:15:40.734 [2024-05-15 09:08:53.025869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.734 [2024-05-15 09:08:53.128693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.871  Copying: 214/512 [MB] (214 MBps) Copying: 424/512 [MB] (209 MBps) Copying: 512/512 [MB] (average 211 MBps) 00:15:43.871 00:15:43.871 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:15:43.871 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:15:43.871 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:15:43.871 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:15:43.871 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:15:43.871 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:43.871 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:43.871 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:15:43.871 [2024-05-15 09:08:56.164659] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:43.871 [2024-05-15 09:08:56.164998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63450 ] 00:15:43.871 { 00:15:43.871 "subsystems": [ 00:15:43.871 { 00:15:43.871 "subsystem": "bdev", 00:15:43.871 "config": [ 00:15:43.871 { 00:15:43.871 "params": { 00:15:43.871 "block_size": 512, 00:15:43.871 "num_blocks": 1048576, 00:15:43.871 "name": "malloc0" 00:15:43.871 }, 00:15:43.871 "method": "bdev_malloc_create" 00:15:43.871 }, 00:15:43.871 { 00:15:43.871 "params": { 00:15:43.871 "filename": "/dev/zram1", 00:15:43.871 "name": "uring0" 00:15:43.871 }, 00:15:43.871 "method": "bdev_uring_create" 00:15:43.871 }, 00:15:43.871 { 00:15:43.871 "params": { 00:15:43.871 "name": "uring0" 00:15:43.871 }, 00:15:43.871 "method": "bdev_uring_delete" 00:15:43.871 }, 00:15:43.871 { 00:15:43.871 "method": "bdev_wait_for_examine" 00:15:43.871 } 00:15:43.871 ] 00:15:43.871 } 00:15:43.871 ] 00:15:43.871 } 00:15:43.871 [2024-05-15 09:08:56.301510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.130 [2024-05-15 09:08:56.405229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.388 [2024-05-15 09:08:56.602805] bdev.c:4995:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:15:44.646  Copying: 0/0 [B] (average 0 Bps) 00:15:44.646 00:15:44.646 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:15:44.646 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:15:44.646 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@649 -- # local es=0 00:15:44.646 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:15:44.646 09:08:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:44.646 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:15:44.646 [2024-05-15 09:08:57.046920] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:44.646 [2024-05-15 09:08:57.047238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63473 ] 00:15:44.646 { 00:15:44.646 "subsystems": [ 00:15:44.646 { 00:15:44.646 "subsystem": "bdev", 00:15:44.646 "config": [ 00:15:44.646 { 00:15:44.646 "params": { 00:15:44.646 "block_size": 512, 00:15:44.646 "num_blocks": 1048576, 00:15:44.646 "name": "malloc0" 00:15:44.646 }, 00:15:44.646 "method": "bdev_malloc_create" 00:15:44.646 }, 00:15:44.646 { 00:15:44.646 "params": { 00:15:44.646 "filename": "/dev/zram1", 00:15:44.646 "name": "uring0" 00:15:44.646 }, 00:15:44.646 "method": "bdev_uring_create" 00:15:44.646 }, 00:15:44.646 { 00:15:44.646 "params": { 00:15:44.646 "name": "uring0" 00:15:44.646 }, 00:15:44.646 "method": "bdev_uring_delete" 00:15:44.646 }, 00:15:44.646 { 00:15:44.646 "method": "bdev_wait_for_examine" 00:15:44.646 } 00:15:44.646 ] 00:15:44.646 } 00:15:44.646 ] 00:15:44.646 } 00:15:44.904 [2024-05-15 09:08:57.198693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.904 [2024-05-15 09:08:57.318221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.162 [2024-05-15 09:08:57.539948] bdev.c:4995:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:15:45.162 [2024-05-15 09:08:57.564040] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:15:45.162 [2024-05-15 09:08:57.564267] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:15:45.162 [2024-05-15 09:08:57.564315] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:15:45.162 [2024-05-15 09:08:57.564418] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:45.421 [2024-05-15 09:08:57.821887] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # es=237 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # es=109 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # case "$es" in 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@669 -- # es=1 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:15:45.679 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:15:45.680 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:15:45.680 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:15:45.680 09:08:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:15:45.938 00:15:45.938 real 0m13.121s 00:15:45.938 user 0m8.281s 
00:15:45.938 sys 0m11.164s 00:15:45.938 09:08:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:45.938 09:08:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:15:45.938 ************************************ 00:15:45.938 END TEST dd_uring_copy 00:15:45.938 ************************************ 00:15:45.938 ************************************ 00:15:45.938 END TEST spdk_dd_uring 00:15:45.938 ************************************ 00:15:45.938 00:15:45.938 real 0m13.272s 00:15:45.938 user 0m8.345s 00:15:45.938 sys 0m11.255s 00:15:45.938 09:08:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:45.938 09:08:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:15:45.938 09:08:58 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:15:45.938 09:08:58 spdk_dd -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:45.938 09:08:58 spdk_dd -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:45.938 09:08:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:45.938 ************************************ 00:15:45.938 START TEST spdk_dd_sparse 00:15:45.938 ************************************ 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:15:45.938 * Looking for test storage... 00:15:45.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.938 09:08:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:15:45.939 1+0 records in 00:15:45.939 1+0 records out 00:15:45.939 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00705804 s, 594 MB/s 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:15:45.939 1+0 records in 00:15:45.939 1+0 records out 00:15:45.939 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00769671 s, 545 MB/s 00:15:45.939 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:15:46.198 1+0 records in 00:15:46.198 1+0 records out 00:15:46.198 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00510325 s, 822 MB/s 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:15:46.198 ************************************ 00:15:46.198 START TEST dd_sparse_file_to_file 00:15:46.198 ************************************ 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # 
file_to_file 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:15:46.198 09:08:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:15:46.198 [2024-05-15 09:08:58.449643] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:46.198 [2024-05-15 09:08:58.449739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63565 ] 00:15:46.198 { 00:15:46.198 "subsystems": [ 00:15:46.198 { 00:15:46.198 "subsystem": "bdev", 00:15:46.198 "config": [ 00:15:46.198 { 00:15:46.198 "params": { 00:15:46.198 "block_size": 4096, 00:15:46.198 "filename": "dd_sparse_aio_disk", 00:15:46.198 "name": "dd_aio" 00:15:46.198 }, 00:15:46.198 "method": "bdev_aio_create" 00:15:46.198 }, 00:15:46.198 { 00:15:46.198 "params": { 00:15:46.198 "lvs_name": "dd_lvstore", 00:15:46.198 "bdev_name": "dd_aio" 00:15:46.198 }, 00:15:46.198 "method": "bdev_lvol_create_lvstore" 00:15:46.198 }, 00:15:46.198 { 00:15:46.198 "method": "bdev_wait_for_examine" 00:15:46.198 } 00:15:46.198 ] 00:15:46.198 } 00:15:46.198 ] 00:15:46.198 } 00:15:46.198 [2024-05-15 09:08:58.591364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.457 [2024-05-15 09:08:58.708406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.716  Copying: 12/36 [MB] (average 923 MBps) 00:15:46.716 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- 
# stat1_b=24576 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:15:46.716 00:15:46.716 real 0m0.728s 00:15:46.716 user 0m0.462s 00:15:46.716 sys 0m0.338s 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:46.716 ************************************ 00:15:46.716 END TEST dd_sparse_file_to_file 00:15:46.716 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:15:46.716 ************************************ 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:15:46.975 ************************************ 00:15:46.975 START TEST dd_sparse_file_to_bdev 00:15:46.975 ************************************ 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # file_to_bdev 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:46.975 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:46.975 [2024-05-15 09:08:59.233300] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
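The stat pairs above are how the sparse tests decide that holes survived the copy: %s prints the apparent file size while %b counts the blocks actually allocated (in units of %B, normally 512 bytes), so holes do not contribute to %b. Reading this run's numbers back as a worked check:

stat --printf='%s\n' file_zero1    # 37748736 bytes = 36 MiB apparent size
stat --printf='%b\n' file_zero1    # 24576 blocks * 512 bytes = 12 MiB actually allocated
# 12 MiB matches the three 4 MiB chunks written at seek 0, 4 and 8 during prepare; a fully
# allocated 36 MiB file would report 73728 blocks. Equal %s and %b on file_zero2 therefore
# mean spdk_dd --sparse reproduced both the data and the holes.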
00:15:46.975 [2024-05-15 09:08:59.233417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63613 ] 00:15:46.975 { 00:15:46.975 "subsystems": [ 00:15:46.975 { 00:15:46.975 "subsystem": "bdev", 00:15:46.975 "config": [ 00:15:46.975 { 00:15:46.975 "params": { 00:15:46.975 "block_size": 4096, 00:15:46.975 "filename": "dd_sparse_aio_disk", 00:15:46.975 "name": "dd_aio" 00:15:46.975 }, 00:15:46.975 "method": "bdev_aio_create" 00:15:46.975 }, 00:15:46.975 { 00:15:46.975 "params": { 00:15:46.975 "lvs_name": "dd_lvstore", 00:15:46.975 "lvol_name": "dd_lvol", 00:15:46.975 "size_in_mib": 36, 00:15:46.975 "thin_provision": true 00:15:46.975 }, 00:15:46.975 "method": "bdev_lvol_create" 00:15:46.975 }, 00:15:46.975 { 00:15:46.975 "method": "bdev_wait_for_examine" 00:15:46.975 } 00:15:46.975 ] 00:15:46.975 } 00:15:46.975 ] 00:15:46.975 } 00:15:46.975 [2024-05-15 09:08:59.375740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.235 [2024-05-15 09:08:59.476165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.493  Copying: 12/36 [MB] (average 500 MBps) 00:15:47.493 00:15:47.493 00:15:47.493 real 0m0.669s 00:15:47.493 user 0m0.429s 00:15:47.493 sys 0m0.330s 00:15:47.493 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:47.494 ************************************ 00:15:47.494 END TEST dd_sparse_file_to_bdev 00:15:47.494 ************************************ 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:15:47.494 ************************************ 00:15:47.494 START TEST dd_sparse_bdev_to_file 00:15:47.494 ************************************ 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # bdev_to_file 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:15:47.494 09:08:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:15:47.752 { 
00:15:47.752 "subsystems": [ 00:15:47.752 { 00:15:47.752 "subsystem": "bdev", 00:15:47.752 "config": [ 00:15:47.752 { 00:15:47.752 "params": { 00:15:47.752 "block_size": 4096, 00:15:47.752 "filename": "dd_sparse_aio_disk", 00:15:47.752 "name": "dd_aio" 00:15:47.752 }, 00:15:47.752 "method": "bdev_aio_create" 00:15:47.752 }, 00:15:47.752 { 00:15:47.752 "method": "bdev_wait_for_examine" 00:15:47.752 } 00:15:47.752 ] 00:15:47.752 } 00:15:47.752 ] 00:15:47.752 } 00:15:47.752 [2024-05-15 09:08:59.983137] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:47.752 [2024-05-15 09:08:59.983267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63645 ] 00:15:47.752 [2024-05-15 09:09:00.135672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.011 [2024-05-15 09:09:00.240530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.269  Copying: 12/36 [MB] (average 857 MBps) 00:15:48.269 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:15:48.269 00:15:48.269 real 0m0.705s 00:15:48.269 user 0m0.446s 00:15:48.269 sys 0m0.335s 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:15:48.269 ************************************ 00:15:48.269 END TEST dd_sparse_bdev_to_file 00:15:48.269 ************************************ 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:15:48.269 00:15:48.269 real 0m2.448s 00:15:48.269 user 0m1.457s 00:15:48.269 sys 0m1.236s 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:48.269 09:09:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:15:48.269 ************************************ 00:15:48.269 END TEST 
spdk_dd_sparse 00:15:48.269 ************************************ 00:15:48.528 09:09:00 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:15:48.528 09:09:00 spdk_dd -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:48.528 09:09:00 spdk_dd -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:48.528 09:09:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:48.528 ************************************ 00:15:48.528 START TEST spdk_dd_negative 00:15:48.528 ************************************ 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:15:48.528 * Looking for test storage... 00:15:48.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:48.528 ************************************ 00:15:48.528 START TEST dd_invalid_arguments 00:15:48.528 ************************************ 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # invalid_arguments 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@649 -- # local es=0 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:48.528 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:15:48.528 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:15:48.528 00:15:48.528 CPU options: 00:15:48.528 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:15:48.528 (like [0,1,10]) 00:15:48.528 --lcores lcore to CPU mapping list. The list is in the format: 00:15:48.528 [<,lcores[@CPUs]>...] 00:15:48.528 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:15:48.528 Within the group, '-' is used for range separator, 00:15:48.528 ',' is used for single number separator. 00:15:48.528 '( )' can be omitted for single element group, 00:15:48.528 '@' can be omitted if cpus and lcores have the same value 00:15:48.528 --disable-cpumask-locks Disable CPU core lock files. 00:15:48.528 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:15:48.528 pollers in the app support interrupt mode) 00:15:48.528 -p, --main-core main (primary) core for DPDK 00:15:48.528 00:15:48.528 Configuration options: 00:15:48.528 -c, --config, --json JSON config file 00:15:48.528 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:15:48.528 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:15:48.528 --wait-for-rpc wait for RPCs to initialize subsystems 00:15:48.528 --rpcs-allowed comma-separated list of permitted RPCS 00:15:48.528 --json-ignore-init-errors don't exit on invalid config entry 00:15:48.528 00:15:48.528 Memory options: 00:15:48.528 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:15:48.528 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:15:48.528 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:15:48.528 -R, --huge-unlink unlink huge files after initialization 00:15:48.528 -n, --mem-channels number of memory channels used for DPDK 00:15:48.528 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:15:48.528 --msg-mempool-size global message memory pool size in count (default: 262143) 00:15:48.528 --no-huge run without using hugepages 00:15:48.528 -i, --shm-id shared memory ID (optional) 00:15:48.528 -g, --single-file-segments force creating just one hugetlbfs file 00:15:48.528 00:15:48.528 PCI options: 00:15:48.528 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:15:48.528 -B, --pci-blocked pci addr to block (can be used more than once) 00:15:48.528 -u, --no-pci disable PCI access 00:15:48.528 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:15:48.528 00:15:48.528 Log options: 00:15:48.528 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:15:48.528 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:15:48.528 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:15:48.528 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:15:48.528 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:15:48.528 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:15:48.528 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:15:48.528 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:15:48.528 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:15:48.528 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:15:48.528 virtio_vfio_user, vmd) 00:15:48.528 --silence-noticelog 
disable notice level logging to stderr 00:15:48.528 00:15:48.528 Trace options: 00:15:48.528 --num-trace-entries number of trace entries for each core, must be power of 2, 00:15:48.528 setting 0 to disable trace (default 32768) 00:15:48.528 Tracepoints vary in size and can use more than one trace entry. 00:15:48.528 -e, --tpoint-group [:] 00:15:48.528 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:15:48.528 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:15:48.528 [2024-05-15 09:09:00.908898] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:15:48.528 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:15:48.529 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:15:48.529 a tracepoint group. First tpoint inside a group can be enabled by 00:15:48.529 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:15:48.529 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:15:48.529 in /include/spdk_internal/trace_defs.h 00:15:48.529 00:15:48.529 Other options: 00:15:48.529 -h, --help show this usage 00:15:48.529 -v, --version print SPDK version 00:15:48.529 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:15:48.529 --env-context Opaque context for use of the env implementation 00:15:48.529 00:15:48.529 Application specific: 00:15:48.529 [--------- DD Options ---------] 00:15:48.529 --if Input file. Must specify either --if or --ib. 00:15:48.529 --ib Input bdev. Must specify either --if or --ib. 00:15:48.529 --of Output file. Must specify either --of or --ob. 00:15:48.529 --ob Output bdev. Must specify either --of or --ob. 00:15:48.529 --iflag Input file flags. 00:15:48.529 --oflag Output file flags. 00:15:48.529 --bs I/O unit size (default: 4096) 00:15:48.529 --qd Queue depth (default: 2) 00:15:48.529 --count I/O unit count. The number of I/O units to copy. (default: all) 00:15:48.529 --skip Skip this many I/O units at start of input. (default: 0) 00:15:48.529 --seek Skip this many I/O units at start of output. (default: 0) 00:15:48.529 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:15:48.529 --sparse Enable hole skipping in input target 00:15:48.529 Available iflag and oflag values: 00:15:48.529 append - append mode 00:15:48.529 direct - use direct I/O for data 00:15:48.529 directory - fail unless a directory 00:15:48.529 dsync - use synchronized I/O for data 00:15:48.529 noatime - do not update access time 00:15:48.529 noctty - do not assign controlling terminal from file 00:15:48.529 nofollow - do not follow symlinks 00:15:48.529 nonblock - use non-blocking I/O 00:15:48.529 sync - use synchronized I/O for data and metadata 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # es=2 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:48.529 00:15:48.529 real 0m0.058s 00:15:48.529 user 0m0.032s 00:15:48.529 sys 0m0.026s 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:48.529 ************************************ 00:15:48.529 END TEST dd_invalid_arguments 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:15:48.529 ************************************ 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:48.529 09:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:48.787 ************************************ 00:15:48.787 START TEST dd_double_input 00:15:48.787 ************************************ 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # double_input 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@649 -- # local es=0 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:48.787 09:09:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:15:48.787 [2024-05-15 09:09:01.027094] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:15:48.787 09:09:01 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # es=22 00:15:48.787 09:09:01 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:48.787 09:09:01 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:48.787 09:09:01 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:48.787 00:15:48.787 real 0m0.062s 00:15:48.787 user 0m0.038s 00:15:48.787 sys 0m0.024s 00:15:48.787 09:09:01 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:48.787 09:09:01 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:15:48.788 ************************************ 00:15:48.788 END TEST dd_double_input 00:15:48.788 ************************************ 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:48.788 ************************************ 00:15:48.788 START TEST dd_double_output 00:15:48.788 ************************************ 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # double_output 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@649 -- # local es=0 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.788 09:09:01 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:15:48.788 [2024-05-15 09:09:01.151426] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # es=22 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:48.788 00:15:48.788 real 0m0.080s 00:15:48.788 user 0m0.049s 00:15:48.788 sys 0m0.027s 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:48.788 ************************************ 00:15:48.788 END TEST dd_double_output 00:15:48.788 ************************************ 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:48.788 ************************************ 00:15:48.788 START TEST dd_no_input 00:15:48.788 ************************************ 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # no_input 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@649 -- # local es=0 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:48.788 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:15:49.045 [2024-05-15 09:09:01.279530] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:15:49.045 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # es=22 00:15:49.045 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:49.045 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:49.046 00:15:49.046 real 0m0.075s 00:15:49.046 user 0m0.046s 00:15:49.046 sys 0m0.028s 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:15:49.046 ************************************ 00:15:49.046 END TEST dd_no_input 00:15:49.046 ************************************ 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:49.046 ************************************ 00:15:49.046 START TEST dd_no_output 00:15:49.046 ************************************ 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # no_output 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@649 -- # local es=0 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.046 09:09:01 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:49.046 [2024-05-15 09:09:01.411154] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # es=22 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:49.046 00:15:49.046 real 0m0.078s 00:15:49.046 user 0m0.044s 00:15:49.046 sys 0m0.033s 00:15:49.046 ************************************ 00:15:49.046 END TEST dd_no_output 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:15:49.046 ************************************ 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:49.046 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:49.304 ************************************ 00:15:49.304 START TEST dd_wrong_blocksize 00:15:49.304 ************************************ 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # wrong_blocksize 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@649 -- # local es=0 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.304 09:09:01 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:15:49.304 [2024-05-15 09:09:01.535411] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # es=22 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:49.304 00:15:49.304 real 0m0.057s 00:15:49.304 user 0m0.032s 00:15:49.304 sys 0m0.024s 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:15:49.304 ************************************ 00:15:49.304 END TEST dd_wrong_blocksize 00:15:49.304 ************************************ 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:49.304 ************************************ 00:15:49.304 START TEST dd_smaller_blocksize 00:15:49.304 ************************************ 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # smaller_blocksize 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@649 -- # local es=0 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.304 09:09:01 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.304 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.305 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.305 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:49.305 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:49.305 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:49.305 09:09:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:15:49.305 [2024-05-15 09:09:01.673677] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:49.305 [2024-05-15 09:09:01.673794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63864 ] 00:15:49.563 [2024-05-15 09:09:01.812642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.563 [2024-05-15 09:09:01.915179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.822 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:15:50.080 [2024-05-15 09:09:02.272245] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:15:50.080 [2024-05-15 09:09:02.272302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:50.080 [2024-05-15 09:09:02.370933] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:50.080 09:09:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # es=244 00:15:50.080 09:09:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:50.080 09:09:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # es=116 00:15:50.080 09:09:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # case "$es" in 00:15:50.080 09:09:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@669 -- # es=1 00:15:50.080 09:09:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:50.080 00:15:50.080 real 0m0.871s 00:15:50.080 user 0m0.420s 00:15:50.080 sys 0m0.345s 00:15:50.080 09:09:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:50.080 ************************************ 00:15:50.080 END TEST dd_smaller_blocksize 00:15:50.080 ************************************ 00:15:50.080 09:09:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:50.340 ************************************ 00:15:50.340 START TEST dd_invalid_count 00:15:50.340 ************************************ 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # invalid_count 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@649 -- # local es=0 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:15:50.340 [2024-05-15 09:09:02.605662] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # es=22 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:50.340 00:15:50.340 real 0m0.074s 00:15:50.340 user 0m0.045s 00:15:50.340 sys 0m0.028s 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:15:50.340 ************************************ 00:15:50.340 END TEST dd_invalid_count 
00:15:50.340 ************************************ 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:50.340 ************************************ 00:15:50.340 START TEST dd_invalid_oflag 00:15:50.340 ************************************ 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # invalid_oflag 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@649 -- # local es=0 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:15:50.340 [2024-05-15 09:09:02.746498] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # es=22 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:50.340 00:15:50.340 real 0m0.075s 00:15:50.340 user 0m0.051s 00:15:50.340 sys 0m0.023s 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:50.340 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:15:50.340 ************************************ 00:15:50.340 END TEST dd_invalid_oflag 00:15:50.340 ************************************ 
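For contrast with the failing invocations exercised above, the usage text captured at the start of this suite implies a well-formed file-to-file copy along the lines of the sketch below. This is an illustration only, not a command taken from this run: the block size and oflag value are arbitrary choices, and the dump files are the ones created by negative_dd.sh earlier. It simply respects the rules the negative tests probe: exactly one input (--if or --ib), exactly one output (--of or --ob), --oflag only together with --of, and a positive --bs.

  # sketch of a valid spdk_dd invocation (illustrative values; dsync is one of the documented oflag values)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --bs=4096 \
    --oflag=dsync
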
00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:50.600 ************************************ 00:15:50.600 START TEST dd_invalid_iflag 00:15:50.600 ************************************ 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # invalid_iflag 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@649 -- # local es=0 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:15:50.600 [2024-05-15 09:09:02.883927] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # es=22 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:50.600 00:15:50.600 real 0m0.074s 00:15:50.600 user 0m0.039s 00:15:50.600 sys 0m0.034s 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:15:50.600 ************************************ 00:15:50.600 END TEST dd_invalid_iflag 00:15:50.600 ************************************ 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative -- 
dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:50.600 ************************************ 00:15:50.600 START TEST dd_unknown_flag 00:15:50.600 ************************************ 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # unknown_flag 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@649 -- # local es=0 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:50.600 09:09:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:15:50.600 [2024-05-15 09:09:03.023904] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:50.600 [2024-05-15 09:09:03.024020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63961 ] 00:15:50.860 [2024-05-15 09:09:03.167092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.860 [2024-05-15 09:09:03.285814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.121 [2024-05-15 09:09:03.362658] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:15:51.121 [2024-05-15 09:09:03.362724] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.121 [2024-05-15 09:09:03.362790] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:15:51.121 [2024-05-15 09:09:03.362801] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.121 [2024-05-15 09:09:03.363023] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:15:51.121 [2024-05-15 09:09:03.363037] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.121 [2024-05-15 09:09:03.363084] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:15:51.121 [2024-05-15 09:09:03.363093] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:15:51.121 [2024-05-15 09:09:03.459882] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:51.380 09:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # es=234 00:15:51.380 09:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:51.380 09:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # es=106 00:15:51.380 09:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # case "$es" in 00:15:51.380 09:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@669 -- # es=1 00:15:51.380 09:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:51.380 00:15:51.381 real 0m0.616s 00:15:51.381 user 0m0.363s 00:15:51.381 sys 0m0.168s 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:15:51.381 ************************************ 00:15:51.381 END TEST dd_unknown_flag 00:15:51.381 ************************************ 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:51.381 ************************************ 00:15:51.381 START TEST dd_invalid_json 00:15:51.381 ************************************ 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # invalid_json 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:15:51.381 09:09:03 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@649 -- # local es=0 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:51.381 09:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:15:51.381 [2024-05-15 09:09:03.690914] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:15:51.381 [2024-05-15 09:09:03.690995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63990 ] 00:15:51.381 [2024-05-15 09:09:03.824781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.639 [2024-05-15 09:09:03.943226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.639 [2024-05-15 09:09:03.943291] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:15:51.639 [2024-05-15 09:09:03.943305] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:51.639 [2024-05-15 09:09:03.943314] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.639 [2024-05-15 09:09:03.943346] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:51.639 09:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # es=234 00:15:51.639 09:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:51.639 09:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # es=106 00:15:51.639 09:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # case "$es" in 00:15:51.639 09:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@669 -- # es=1 00:15:51.639 09:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:51.639 00:15:51.639 real 0m0.423s 00:15:51.639 user 0m0.232s 00:15:51.639 sys 0m0.075s 00:15:51.639 09:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:51.639 09:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:15:51.639 ************************************ 00:15:51.639 END TEST dd_invalid_json 00:15:51.639 ************************************ 00:15:51.897 00:15:51.897 real 0m3.360s 00:15:51.897 user 0m1.666s 00:15:51.897 sys 0m1.363s 00:15:51.897 09:09:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:51.897 09:09:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:15:51.897 ************************************ 00:15:51.897 END TEST spdk_dd_negative 00:15:51.897 ************************************ 00:15:51.897 00:15:51.897 real 1m15.519s 00:15:51.897 user 0m48.372s 00:15:51.897 sys 0m30.982s 00:15:51.897 09:09:04 spdk_dd -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:51.897 09:09:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:51.897 ************************************ 00:15:51.897 END TEST spdk_dd 00:15:51.897 ************************************ 00:15:51.897 09:09:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:15:51.897 09:09:04 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:51.897 09:09:04 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:51.897 09:09:04 -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:51.897 09:09:04 -- common/autotest_common.sh@10 -- # set +x 00:15:51.897 09:09:04 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:51.897 09:09:04 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:15:51.897 09:09:04 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:15:51.897 09:09:04 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:15:51.897 09:09:04 -- spdk/autotest.sh@279 -- # '[' tcp = 
rdma ']' 00:15:51.897 09:09:04 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:15:51.897 09:09:04 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:15:51.897 09:09:04 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:51.897 09:09:04 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:51.897 09:09:04 -- common/autotest_common.sh@10 -- # set +x 00:15:51.897 ************************************ 00:15:51.897 START TEST nvmf_tcp 00:15:51.897 ************************************ 00:15:51.897 09:09:04 nvmf_tcp -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:15:52.156 * Looking for test storage... 00:15:52.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:52.156 09:09:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:15:52.156 09:09:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:15:52.156 09:09:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.156 09:09:04 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:15:52.156 09:09:04 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.156 09:09:04 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.156 09:09:04 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.157 09:09:04 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.157 09:09:04 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.157 09:09:04 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.157 09:09:04 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.157 09:09:04 nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.157 09:09:04 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.157 09:09:04 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:15:52.157 09:09:04 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:15:52.157 09:09:04 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:52.157 09:09:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:15:52.157 09:09:04 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:52.157 09:09:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:52.157 09:09:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:52.157 09:09:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:52.157 ************************************ 00:15:52.157 START TEST nvmf_host_management 00:15:52.157 ************************************ 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:52.157 * Looking for test storage... 
00:15:52.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
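Editor's note: the common.sh block above mints a fresh host NQN per run and reuses its trailing UUID as the host ID. A minimal stand-alone sketch of that derivation (the parameter expansion is an assumption about how common.sh strips the prefix; the example values match the trace):
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID (assumed equivalent of what common.sh does)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # tests that use the kernel initiator would pass these flags to "$NVME_CONNECT" (nvme connect);
    # host_management.sh below drives the SPDK bdevperf initiator instead, so they stay unused here.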
00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.157 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:52.158 Cannot find device "nvmf_init_br" 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:52.158 Cannot find device "nvmf_tgt_br" 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.158 Cannot find device "nvmf_tgt_br2" 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:52.158 Cannot find device "nvmf_init_br" 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:52.158 Cannot find device "nvmf_tgt_br" 00:15:52.158 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:15:52.158 09:09:04 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:52.416 Cannot find device "nvmf_tgt_br2" 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:52.416 Cannot find device "nvmf_br" 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:52.416 Cannot find device "nvmf_init_if" 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:52.416 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:52.417 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.417 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.417 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.417 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
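Editor's note: the nvmf_veth_init steps above (plus the enslaving, iptables and ping checks that follow) build a small virtual topology: three veth pairs, with the initiator end (nvmf_init_if, 10.0.0.1/24) left on the host, the two target ends (nvmf_tgt_if / nvmf_tgt_if2, 10.0.0.2 and 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace, and the *_br peers enslaved to the nvmf_br bridge. A condensed recap using only commands that appear in the trace:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port, 10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port, 10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # the next trace lines enslave nvmf_init_br/nvmf_tgt_br/nvmf_tgt_br2 to nvmf_br,
    # open TCP/4420 in iptables, and ping 10.0.0.2, 10.0.0.3 and 10.0.0.1 to verify connectivity.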
00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:52.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:15:52.675 00:15:52.675 --- 10.0.0.2 ping statistics --- 00:15:52.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.675 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:52.675 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.675 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:52.675 00:15:52.675 --- 10.0.0.3 ping statistics --- 00:15:52.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.675 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:52.675 00:15:52.675 --- 10.0.0.1 ping statistics --- 00:15:52.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.675 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:52.675 09:09:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=64254 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64254 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 64254 ']' 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:52.675 09:09:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:52.675 [2024-05-15 09:09:05.064537] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:52.675 [2024-05-15 09:09:05.064891] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.934 [2024-05-15 09:09:05.226655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.934 [2024-05-15 09:09:05.352873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.934 [2024-05-15 09:09:05.353172] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.934 [2024-05-15 09:09:05.353361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.934 [2024-05-15 09:09:05.353478] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.934 [2024-05-15 09:09:05.353527] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
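Editor's note: nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that pattern; the launch command is the one shown in the trace, while the polling loop and the rpc_get_methods probe are assumptions about what waitforlisten amounts to:
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is ready (assumed equivalent of waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done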
00:15:52.934 [2024-05-15 09:09:05.353796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.934 [2024-05-15 09:09:05.354029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.934 [2024-05-15 09:09:05.354208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:52.934 [2024-05-15 09:09:05.354214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.910 [2024-05-15 09:09:06.110957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.910 Malloc0 00:15:53.910 [2024-05-15 09:09:06.185583] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:53.910 [2024-05-15 09:09:06.186178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64308 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64308 /var/tmp/bdevperf.sock 
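Editor's note: the create_subsystem step above pipes a here-doc (the 'cat' at host_management.sh@23) into rpc_cmd, so the individual RPC calls are not echoed; only their effects show up (a Malloc0 bdev and a TCP listener on 10.0.0.2:4420). A plausible equivalent batch, using standard SPDK RPC names and the subsystem/host NQNs that appear later in the trace — the exact contents of rpcs.txt are an assumption:
    bdev_malloc_create -b Malloc0 64 512                                        # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from above
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
Each line would be one rpc.py command fed through the harness's rpc_cmd helper.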
00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 64308 ']' 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:53.910 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:53.911 { 00:15:53.911 "params": { 00:15:53.911 "name": "Nvme$subsystem", 00:15:53.911 "trtype": "$TEST_TRANSPORT", 00:15:53.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:53.911 "adrfam": "ipv4", 00:15:53.911 "trsvcid": "$NVMF_PORT", 00:15:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:53.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:53.911 "hdgst": ${hdgst:-false}, 00:15:53.911 "ddgst": ${ddgst:-false} 00:15:53.911 }, 00:15:53.911 "method": "bdev_nvme_attach_controller" 00:15:53.911 } 00:15:53.911 EOF 00:15:53.911 )") 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:53.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:53.911 09:09:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:53.911 "params": { 00:15:53.911 "name": "Nvme0", 00:15:53.911 "trtype": "tcp", 00:15:53.911 "traddr": "10.0.0.2", 00:15:53.911 "adrfam": "ipv4", 00:15:53.911 "trsvcid": "4420", 00:15:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:53.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:53.911 "hdgst": false, 00:15:53.911 "ddgst": false 00:15:53.911 }, 00:15:53.911 "method": "bdev_nvme_attach_controller" 00:15:53.911 }' 00:15:53.911 [2024-05-15 09:09:06.294173] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
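Editor's note: the '--json /dev/fd/63' argument above is bash process substitution; gen_nvmf_target_json prints the bdev_nvme_attach_controller config shown just above, and bdevperf reads it as if it were a file. A hedged sketch of the launch idiom (paths shortened; the backgrounding and waitforlisten mirror the perfpid handling in the trace):
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock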
00:15:53.911 [2024-05-15 09:09:06.294825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64308 ] 00:15:54.170 [2024-05-15 09:09:06.437112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.170 [2024-05-15 09:09:06.560522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.428 Running I/O for 10 seconds... 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:54.997 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.997 09:09:07 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.997 [2024-05-15 09:09:07.348159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.348419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.997 [2024-05-15 09:09:07.348593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.348781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.997 [2024-05-15 09:09:07.348864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.348918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.997 [2024-05-15 09:09:07.348972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.349093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.997 [2024-05-15 09:09:07.349160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.349221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.997 [2024-05-15 09:09:07.349370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.349504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.997 [2024-05-15 09:09:07.349654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.349783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.997 [2024-05-15 09:09:07.349925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.350063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.997 [2024-05-15 09:09:07.350191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.997 [2024-05-15 09:09:07.350344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.350478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.350597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.350726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.350838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.350945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.351108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.351287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.351421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.998 [2024-05-15 09:09:07.351688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:54.998 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.998 [2024-05-15 09:09:07.351880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.351906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.351929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.351951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.998 [2024-05-15 09:09:07.351974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.351985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.351997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 
09:09:07.352165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 
09:09:07.352387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.998 [2024-05-15 09:09:07.352615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.998 [2024-05-15 09:09:07.352627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 
09:09:07.352660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 
09:09:07.352884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.352986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.999 [2024-05-15 09:09:07.352996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.353008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ea00 is same with the state(5) to be set 00:15:54.999 [2024-05-15 09:09:07.353080] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x146ea00 was disconnected and freed. reset controller. 
00:15:54.999 [2024-05-15 09:09:07.353181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.999 [2024-05-15 09:09:07.353261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.353334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.999 [2024-05-15 09:09:07.353396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.353448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.999 [2024-05-15 09:09:07.353594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.353713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.999 [2024-05-15 09:09:07.353770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.999 [2024-05-15 09:09:07.353838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f0d0 is same with the state(5) to be set 00:15:54.999 [2024-05-15 09:09:07.354974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:54.999 task offset: 122752 on job bdev=Nvme0n1 fails 00:15:54.999 00:15:54.999 Latency(us) 00:15:54.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.999 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:54.999 Job: Nvme0n1 ended in about 0.61 seconds with error 00:15:54.999 Verification LBA range: start 0x0 length 0x400 00:15:54.999 Nvme0n1 : 0.61 1462.71 91.42 104.48 0.00 39557.09 9362.29 42941.68 00:15:54.999 =================================================================================================================== 00:15:54.999 Total : 1462.71 91.42 104.48 0.00 39557.09 9362.29 42941.68 00:15:54.999 [2024-05-15 09:09:07.357451] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:54.999 [2024-05-15 09:09:07.357582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146f0d0 (9): Bad file descriptor 00:15:54.999 09:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.999 09:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:54.999 [2024-05-15 09:09:07.368335] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
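Editor's note: the wall of 'ABORTED - SQ DELETION' completions and the controller reset above are the intended outcome of the host-management check: access for the host is revoked while bdevperf has I/O in flight, then granted back so the initiator can reset and reconnect. The two RPCs doing this are visible in the trace (rpc_cmd wraps scripts/rpc.py against the target's socket):
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # target drops the qpair, in-flight I/O is aborted
    rpc_cmd nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host re-allowed; bdevperf's reset path reconnects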
00:15:55.933 09:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64308 00:15:55.933 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64308) - No such process 00:15:55.933 09:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:55.934 { 00:15:55.934 "params": { 00:15:55.934 "name": "Nvme$subsystem", 00:15:55.934 "trtype": "$TEST_TRANSPORT", 00:15:55.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:55.934 "adrfam": "ipv4", 00:15:55.934 "trsvcid": "$NVMF_PORT", 00:15:55.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:55.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:55.934 "hdgst": ${hdgst:-false}, 00:15:55.934 "ddgst": ${ddgst:-false} 00:15:55.934 }, 00:15:55.934 "method": "bdev_nvme_attach_controller" 00:15:55.934 } 00:15:55.934 EOF 00:15:55.934 )") 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:55.934 09:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:56.245 09:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:56.245 09:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:56.245 "params": { 00:15:56.245 "name": "Nvme0", 00:15:56.245 "trtype": "tcp", 00:15:56.245 "traddr": "10.0.0.2", 00:15:56.245 "adrfam": "ipv4", 00:15:56.245 "trsvcid": "4420", 00:15:56.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:56.245 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:56.245 "hdgst": false, 00:15:56.245 "ddgst": false 00:15:56.245 }, 00:15:56.245 "method": "bdev_nvme_attach_controller" 00:15:56.245 }' 00:15:56.245 [2024-05-15 09:09:08.424410] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:56.245 [2024-05-15 09:09:08.424797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64352 ] 00:15:56.245 [2024-05-15 09:09:08.572835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.502 [2024-05-15 09:09:08.698402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.502 Running I/O for 1 seconds... 
00:15:57.876 00:15:57.876 Latency(us) 00:15:57.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.876 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:57.876 Verification LBA range: start 0x0 length 0x400 00:15:57.876 Nvme0n1 : 1.03 1731.98 108.25 0.00 0.00 36300.65 4056.99 34203.55 00:15:57.876 =================================================================================================================== 00:15:57.876 Total : 1731.98 108.25 0.00 0.00 36300.65 4056.99 34203.55 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.876 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.134 rmmod nvme_tcp 00:15:58.134 rmmod nvme_fabrics 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64254 ']' 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64254 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 64254 ']' 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 64254 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 64254 00:15:58.134 killing process with pid 64254 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 64254' 00:15:58.134 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 64254 00:15:58.135 [2024-05-15 09:09:10.406263] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
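The teardown just traced (nvmftestfini plus killprocess) condenses to a few commands. A hedged sketch reconstructed from the trace, not the canonical helpers from nvmf/common.sh and autotest_common.sh:

    # Condensed teardown, following the commands traced above.
    sync
    modprobe -v -r nvme-tcp           # produces the 'rmmod nvme_tcp' / 'rmmod nvme_fabrics' lines
    modprobe -v -r nvme-fabrics
    pid=64254                         # the nvmfpid reported earlier in this run
    if [[ $(uname) = Linux && $(ps --no-headers -o comm= "$pid") != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                   # only succeeds when, as here, the target was started from this shell
    fi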
00:15:58.135 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 64254 00:15:58.393 [2024-05-15 09:09:10.626872] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:58.393 00:15:58.393 real 0m6.294s 00:15:58.393 user 0m23.713s 00:15:58.393 sys 0m1.709s 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:58.393 09:09:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:58.393 ************************************ 00:15:58.393 END TEST nvmf_host_management 00:15:58.393 ************************************ 00:15:58.393 09:09:10 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:58.393 09:09:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:58.393 09:09:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:58.393 09:09:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.393 ************************************ 00:15:58.393 START TEST nvmf_lvol 00:15:58.393 ************************************ 00:15:58.393 09:09:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:58.393 * Looking for test storage... 
00:15:58.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.651 09:09:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:58.652 09:09:10 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:58.652 Cannot find device "nvmf_tgt_br" 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.652 Cannot find device "nvmf_tgt_br2" 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:58.652 Cannot find device "nvmf_tgt_br" 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:58.652 Cannot find device "nvmf_tgt_br2" 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:58.652 09:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:58.652 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.925 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:58.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:15:58.925 00:15:58.925 --- 10.0.0.2 ping statistics --- 00:15:58.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.925 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:58.926 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.926 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:58.926 00:15:58.926 --- 10.0.0.3 ping statistics --- 00:15:58.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.926 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:58.926 00:15:58.926 --- 10.0.0.1 ping statistics --- 00:15:58.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.926 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64564 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64564 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 64564 ']' 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:58.926 09:09:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:58.926 [2024-05-15 09:09:11.313736] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:15:58.926 [2024-05-15 09:09:11.314020] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.184 [2024-05-15 09:09:11.475214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:59.184 [2024-05-15 09:09:11.593930] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.184 [2024-05-15 09:09:11.594452] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:59.184 [2024-05-15 09:09:11.594852] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.184 [2024-05-15 09:09:11.595146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.184 [2024-05-15 09:09:11.595572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.184 [2024-05-15 09:09:11.596430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.184 [2024-05-15 09:09:11.596634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.184 [2024-05-15 09:09:11.596625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.164 09:09:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:00.164 09:09:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:16:00.164 09:09:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:00.164 09:09:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:00.164 09:09:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:00.164 09:09:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.164 09:09:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:00.422 [2024-05-15 09:09:12.685280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.422 09:09:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.679 09:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:00.679 09:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.949 09:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:00.949 09:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:01.229 09:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:01.487 09:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9e56d675-283b-4679-bd56-814bc5d371c6 00:16:01.487 09:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9e56d675-283b-4679-bd56-814bc5d371c6 lvol 20 00:16:01.745 09:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=799385de-5c2e-4678-a3d8-fb6c66dd7da5 00:16:01.745 09:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:02.003 09:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 799385de-5c2e-4678-a3d8-fb6c66dd7da5 00:16:02.261 09:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:02.519 [2024-05-15 09:09:14.816709] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:16:02.519 [2024-05-15 09:09:14.817246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.519 09:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:02.777 09:09:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64645 00:16:02.777 09:09:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:02.777 09:09:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:03.749 09:09:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 799385de-5c2e-4678-a3d8-fb6c66dd7da5 MY_SNAPSHOT 00:16:04.008 09:09:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1b1c55e8-666d-4f7f-a8c9-6600556a86d5 00:16:04.008 09:09:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 799385de-5c2e-4678-a3d8-fb6c66dd7da5 30 00:16:04.591 09:09:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1b1c55e8-666d-4f7f-a8c9-6600556a86d5 MY_CLONE 00:16:04.849 09:09:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ec9b7536-03ca-4063-9bcb-2d050f14fd7f 00:16:04.849 09:09:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ec9b7536-03ca-4063-9bcb-2d050f14fd7f 00:16:05.107 09:09:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64645 00:16:13.284 Initializing NVMe Controllers 00:16:13.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:13.284 Controller IO queue size 128, less than required. 00:16:13.284 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:13.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:13.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:13.284 Initialization complete. Launching workers. 
00:16:13.284 ======================================================== 00:16:13.284 Latency(us) 00:16:13.284 Device Information : IOPS MiB/s Average min max 00:16:13.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9811.09 38.32 13053.05 2225.54 71577.63 00:16:13.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10155.09 39.67 12614.44 796.23 90820.44 00:16:13.284 ======================================================== 00:16:13.284 Total : 19966.18 77.99 12829.97 796.23 90820.44 00:16:13.284 00:16:13.284 09:09:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:13.284 09:09:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 799385de-5c2e-4678-a3d8-fb6c66dd7da5 00:16:13.872 09:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e56d675-283b-4679-bd56-814bc5d371c6 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.130 rmmod nvme_tcp 00:16:14.130 rmmod nvme_fabrics 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64564 ']' 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64564 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 64564 ']' 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 64564 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:14.130 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 64564 00:16:14.130 killing process with pid 64564 00:16:14.131 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:14.131 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:14.131 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 64564' 00:16:14.131 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 64564 00:16:14.131 [2024-05-15 09:09:26.542594] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:14.131 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # 
wait 64564 00:16:14.389 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.389 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.389 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.389 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.389 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.389 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.389 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.389 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.647 09:09:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:14.647 ************************************ 00:16:14.647 END TEST nvmf_lvol 00:16:14.647 ************************************ 00:16:14.647 00:16:14.647 real 0m16.086s 00:16:14.647 user 1m4.940s 00:16:14.647 sys 0m6.023s 00:16:14.647 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:14.647 09:09:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:14.647 09:09:26 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:14.647 09:09:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:14.647 09:09:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:14.647 09:09:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.647 ************************************ 00:16:14.647 START TEST nvmf_lvs_grow 00:16:14.647 ************************************ 00:16:14.647 09:09:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:14.647 * Looking for test storage... 
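Stripped of the shell-trace prefixes, the nvmf_lvol test that just finished above is essentially the following RPC sequence (a sketch: the rpc.py path is shortened into a variable and the UUIDs are captured instead of hard-coded):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                                   # Malloc0
    $rpc bdev_malloc_create 64 512                                   # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # returns the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, returns its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # run spdk_nvme_perf against the export, then exercise snapshot/resize/clone/inflate
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    # teardown
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"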
00:16:14.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:14.647 09:09:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.647 09:09:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:14.648 Cannot find device "nvmf_tgt_br" 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.648 Cannot find device "nvmf_tgt_br2" 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:14.648 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:14.906 Cannot find device "nvmf_tgt_br" 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:14.906 Cannot find device "nvmf_tgt_br2" 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.906 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.906 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:15.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:15.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:15.164 00:16:15.164 --- 10.0.0.2 ping statistics --- 00:16:15.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.164 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:15.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:15.164 00:16:15.164 --- 10.0.0.3 ping statistics --- 00:16:15.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.164 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:15.164 00:16:15.164 --- 10.0.0.1 ping statistics --- 00:16:15.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.164 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=64969 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 64969 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 64969 ']' 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.164 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:15.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.165 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
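Both the lvol and lvs_grow runs build the same virtual test network before starting the target. Pulled out of the shell trace above, nvmf_veth_init amounts to the following: the initiator keeps 10.0.0.1 in the root namespace, while the target answers on 10.0.0.2/10.0.0.3 from inside nvmf_tgt_ns_spdk, with the *_br peer ends of each veth pair enslaved to a bridge.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity checks, as in the ping statistics above
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1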
00:16:15.165 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:15.165 09:09:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.165 [2024-05-15 09:09:27.466836] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:16:15.165 [2024-05-15 09:09:27.466916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.423 [2024-05-15 09:09:27.611637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.423 [2024-05-15 09:09:27.729656] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.423 [2024-05-15 09:09:27.729720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.423 [2024-05-15 09:09:27.729735] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.423 [2024-05-15 09:09:27.729748] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.423 [2024-05-15 09:09:27.729759] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.423 [2024-05-15 09:09:27.729793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.990 09:09:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:15.990 09:09:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:16:15.990 09:09:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.990 09:09:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:15.990 09:09:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.990 09:09:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.990 09:09:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:16.600 [2024-05-15 09:09:28.715090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:16.600 ************************************ 00:16:16.600 START TEST lvs_grow_clean 00:16:16.600 ************************************ 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:16.600 09:09:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:16.600 09:09:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:16.859 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:16.859 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:16.859 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=14379913-119c-41d2-977d-fedd8c623a1d 00:16:16.859 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:16.859 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:17.424 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:17.424 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:17.424 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14379913-119c-41d2-977d-fedd8c623a1d lvol 150 00:16:17.424 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b61b999-5214-4fe5-ae4e-62cfcd8cdaab 00:16:17.424 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:17.424 09:09:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:17.683 [2024-05-15 09:09:30.109238] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:17.683 [2024-05-15 09:09:30.109319] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:17.683 true 00:16:17.942 09:09:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:17.942 09:09:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:17.942 09:09:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:17.942 09:09:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:18.201 09:09:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b61b999-5214-4fe5-ae4e-62cfcd8cdaab 00:16:18.461 09:09:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:18.720 [2024-05-15 09:09:31.153748] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:18.720 [2024-05-15 09:09:31.154015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.007 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65053 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65053 /var/tmp/bdevperf.sock 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 65053 ']' 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:19.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:19.266 09:09:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:19.266 [2024-05-15 09:09:31.525150] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
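The clean-variant setup that the xtrace above has just finished boils down to a short RPC sequence. The sketch below condenses it for readability; the paths, 4096-byte block size and 4 MiB cluster size are copied from the log, while the $rpc/$aio_file shorthands (and capturing the returned UUIDs into $lvs/$lvol) are illustrative rather than a verbatim excerpt of nvmf_lvs_grow.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio_file"                        # 200 MiB backing file
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096      # expose it as an AIO bdev
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)    # 150 MiB logical volume

    # Grow only the backing file for now; the lvstore itself is grown later
    # with bdev_lvol_grow_lvstore while bdevperf is writing to it.
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420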
00:16:19.266 [2024-05-15 09:09:31.525426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65053 ] 00:16:19.266 [2024-05-15 09:09:31.657125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.524 [2024-05-15 09:09:31.760591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.091 09:09:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:20.091 09:09:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:16:20.091 09:09:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:20.350 Nvme0n1 00:16:20.350 09:09:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:20.609 [ 00:16:20.609 { 00:16:20.609 "name": "Nvme0n1", 00:16:20.609 "aliases": [ 00:16:20.609 "6b61b999-5214-4fe5-ae4e-62cfcd8cdaab" 00:16:20.609 ], 00:16:20.609 "product_name": "NVMe disk", 00:16:20.609 "block_size": 4096, 00:16:20.610 "num_blocks": 38912, 00:16:20.610 "uuid": "6b61b999-5214-4fe5-ae4e-62cfcd8cdaab", 00:16:20.610 "assigned_rate_limits": { 00:16:20.610 "rw_ios_per_sec": 0, 00:16:20.610 "rw_mbytes_per_sec": 0, 00:16:20.610 "r_mbytes_per_sec": 0, 00:16:20.610 "w_mbytes_per_sec": 0 00:16:20.610 }, 00:16:20.610 "claimed": false, 00:16:20.610 "zoned": false, 00:16:20.610 "supported_io_types": { 00:16:20.610 "read": true, 00:16:20.610 "write": true, 00:16:20.610 "unmap": true, 00:16:20.610 "write_zeroes": true, 00:16:20.610 "flush": true, 00:16:20.610 "reset": true, 00:16:20.610 "compare": true, 00:16:20.610 "compare_and_write": true, 00:16:20.610 "abort": true, 00:16:20.610 "nvme_admin": true, 00:16:20.610 "nvme_io": true 00:16:20.610 }, 00:16:20.610 "memory_domains": [ 00:16:20.610 { 00:16:20.610 "dma_device_id": "system", 00:16:20.610 "dma_device_type": 1 00:16:20.610 } 00:16:20.610 ], 00:16:20.610 "driver_specific": { 00:16:20.610 "nvme": [ 00:16:20.610 { 00:16:20.610 "trid": { 00:16:20.610 "trtype": "TCP", 00:16:20.610 "adrfam": "IPv4", 00:16:20.610 "traddr": "10.0.0.2", 00:16:20.610 "trsvcid": "4420", 00:16:20.610 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:20.610 }, 00:16:20.610 "ctrlr_data": { 00:16:20.610 "cntlid": 1, 00:16:20.610 "vendor_id": "0x8086", 00:16:20.610 "model_number": "SPDK bdev Controller", 00:16:20.610 "serial_number": "SPDK0", 00:16:20.610 "firmware_revision": "24.05", 00:16:20.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:20.610 "oacs": { 00:16:20.610 "security": 0, 00:16:20.610 "format": 0, 00:16:20.610 "firmware": 0, 00:16:20.610 "ns_manage": 0 00:16:20.610 }, 00:16:20.610 "multi_ctrlr": true, 00:16:20.610 "ana_reporting": false 00:16:20.610 }, 00:16:20.610 "vs": { 00:16:20.610 "nvme_version": "1.3" 00:16:20.610 }, 00:16:20.610 "ns_data": { 00:16:20.610 "id": 1, 00:16:20.610 "can_share": true 00:16:20.610 } 00:16:20.610 } 00:16:20.610 ], 00:16:20.610 "mp_policy": "active_passive" 00:16:20.610 } 00:16:20.610 } 00:16:20.610 ] 00:16:20.610 09:09:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65082 00:16:20.610 09:09:33 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:20.610 09:09:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:20.868 Running I/O for 10 seconds... 00:16:21.833 Latency(us) 00:16:21.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.833 Nvme0n1 : 1.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:16:21.833 =================================================================================================================== 00:16:21.833 Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:16:21.833 00:16:22.766 09:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:22.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.766 Nvme0n1 : 2.00 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:16:22.766 =================================================================================================================== 00:16:22.767 Total : 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:16:22.767 00:16:23.024 true 00:16:23.024 09:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:23.024 09:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:23.283 09:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:23.283 09:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:23.283 09:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65082 00:16:23.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.850 Nvme0n1 : 3.00 10837.33 42.33 0.00 0.00 0.00 0.00 0.00 00:16:23.850 =================================================================================================================== 00:16:23.850 Total : 10837.33 42.33 0.00 0.00 0.00 0.00 0.00 00:16:23.850 00:16:24.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.787 Nvme0n1 : 4.00 10604.50 41.42 0.00 0.00 0.00 0.00 0.00 00:16:24.787 =================================================================================================================== 00:16:24.787 Total : 10604.50 41.42 0.00 0.00 0.00 0.00 0.00 00:16:24.787 00:16:25.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.720 Nvme0n1 : 5.00 10642.60 41.57 0.00 0.00 0.00 0.00 0.00 00:16:25.720 =================================================================================================================== 00:16:25.720 Total : 10642.60 41.57 0.00 0.00 0.00 0.00 0.00 00:16:25.720 00:16:27.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.094 Nvme0n1 : 6.00 10731.50 41.92 0.00 0.00 0.00 0.00 0.00 00:16:27.094 =================================================================================================================== 00:16:27.094 Total : 10731.50 41.92 0.00 0.00 0.00 0.00 0.00 00:16:27.094 00:16:28.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:16:28.029 Nvme0n1 : 7.00 10523.14 41.11 0.00 0.00 0.00 0.00 0.00 00:16:28.029 =================================================================================================================== 00:16:28.029 Total : 10523.14 41.11 0.00 0.00 0.00 0.00 0.00 00:16:28.029 00:16:28.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.964 Nvme0n1 : 8.00 10636.50 41.55 0.00 0.00 0.00 0.00 0.00 00:16:28.964 =================================================================================================================== 00:16:28.964 Total : 10636.50 41.55 0.00 0.00 0.00 0.00 0.00 00:16:28.964 00:16:29.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.899 Nvme0n1 : 9.00 10668.22 41.67 0.00 0.00 0.00 0.00 0.00 00:16:29.899 =================================================================================================================== 00:16:29.899 Total : 10668.22 41.67 0.00 0.00 0.00 0.00 0.00 00:16:29.899 00:16:30.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:30.837 Nvme0n1 : 10.00 10693.60 41.77 0.00 0.00 0.00 0.00 0.00 00:16:30.838 =================================================================================================================== 00:16:30.838 Total : 10693.60 41.77 0.00 0.00 0.00 0.00 0.00 00:16:30.838 00:16:30.838 00:16:30.838 Latency(us) 00:16:30.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:30.838 Nvme0n1 : 10.01 10694.80 41.78 0.00 0.00 11963.42 4774.77 175761.31 00:16:30.838 =================================================================================================================== 00:16:30.838 Total : 10694.80 41.78 0.00 0.00 11963.42 4774.77 175761.31 00:16:30.838 0 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65053 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 65053 ']' 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 65053 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 65053 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:30.838 killing process with pid 65053 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 65053' 00:16:30.838 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 65053 00:16:30.838 Received shutdown signal, test time was about 10.000000 seconds 00:16:30.838 00:16:30.838 Latency(us) 00:16:30.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.838 =================================================================================================================== 00:16:30.838 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:30.838 09:09:43 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 65053 00:16:31.101 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.367 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:31.634 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:31.634 09:09:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:31.903 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:31.903 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:31.903 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:32.174 [2024-05-15 09:09:44.429057] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:32.174 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:32.446 request: 00:16:32.446 { 00:16:32.446 "uuid": "14379913-119c-41d2-977d-fedd8c623a1d", 00:16:32.446 "method": "bdev_lvol_get_lvstores", 00:16:32.446 "req_id": 1 00:16:32.446 } 00:16:32.446 Got JSON-RPC error response 00:16:32.446 response: 00:16:32.446 { 00:16:32.446 "code": -19, 00:16:32.446 "message": "No such device" 
00:16:32.446 } 00:16:32.446 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:16:32.446 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:32.446 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:32.446 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:32.446 09:09:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:32.719 aio_bdev 00:16:32.719 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b61b999-5214-4fe5-ae4e-62cfcd8cdaab 00:16:32.719 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=6b61b999-5214-4fe5-ae4e-62cfcd8cdaab 00:16:32.719 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:16:32.719 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:16:32.719 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:16:32.719 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:16:32.719 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:32.981 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b61b999-5214-4fe5-ae4e-62cfcd8cdaab -t 2000 00:16:33.239 [ 00:16:33.239 { 00:16:33.239 "name": "6b61b999-5214-4fe5-ae4e-62cfcd8cdaab", 00:16:33.239 "aliases": [ 00:16:33.239 "lvs/lvol" 00:16:33.239 ], 00:16:33.239 "product_name": "Logical Volume", 00:16:33.239 "block_size": 4096, 00:16:33.239 "num_blocks": 38912, 00:16:33.239 "uuid": "6b61b999-5214-4fe5-ae4e-62cfcd8cdaab", 00:16:33.239 "assigned_rate_limits": { 00:16:33.239 "rw_ios_per_sec": 0, 00:16:33.239 "rw_mbytes_per_sec": 0, 00:16:33.239 "r_mbytes_per_sec": 0, 00:16:33.239 "w_mbytes_per_sec": 0 00:16:33.239 }, 00:16:33.239 "claimed": false, 00:16:33.239 "zoned": false, 00:16:33.239 "supported_io_types": { 00:16:33.239 "read": true, 00:16:33.239 "write": true, 00:16:33.239 "unmap": true, 00:16:33.239 "write_zeroes": true, 00:16:33.239 "flush": false, 00:16:33.239 "reset": true, 00:16:33.239 "compare": false, 00:16:33.239 "compare_and_write": false, 00:16:33.239 "abort": false, 00:16:33.239 "nvme_admin": false, 00:16:33.239 "nvme_io": false 00:16:33.239 }, 00:16:33.239 "driver_specific": { 00:16:33.239 "lvol": { 00:16:33.239 "lvol_store_uuid": "14379913-119c-41d2-977d-fedd8c623a1d", 00:16:33.239 "base_bdev": "aio_bdev", 00:16:33.239 "thin_provision": false, 00:16:33.239 "num_allocated_clusters": 38, 00:16:33.239 "snapshot": false, 00:16:33.239 "clone": false, 00:16:33.239 "esnap_clone": false 00:16:33.239 } 00:16:33.239 } 00:16:33.239 } 00:16:33.239 ] 00:16:33.239 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:16:33.239 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:33.239 09:09:45 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:33.497 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:33.497 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:33.497 09:09:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:33.756 09:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:33.756 09:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6b61b999-5214-4fe5-ae4e-62cfcd8cdaab 00:16:34.323 09:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14379913-119c-41d2-977d-fedd8c623a1d 00:16:34.581 09:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:34.839 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:35.097 00:16:35.097 real 0m18.700s 00:16:35.097 user 0m16.691s 00:16:35.097 sys 0m3.047s 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:35.097 ************************************ 00:16:35.097 END TEST lvs_grow_clean 00:16:35.097 ************************************ 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:35.097 ************************************ 00:16:35.097 START TEST lvs_grow_dirty 00:16:35.097 ************************************ 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:35.097 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:35.663 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:35.663 09:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:35.920 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:35.920 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:35.920 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:36.178 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:36.178 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:36.178 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb lvol 150 00:16:36.438 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 00:16:36.438 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:36.438 09:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:36.699 [2024-05-15 09:09:49.035387] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:36.699 [2024-05-15 09:09:49.035467] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:36.699 true 00:16:36.699 09:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:36.700 09:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:36.960 09:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:36.960 09:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:37.219 09:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 00:16:37.477 09:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:37.477 [2024-05-15 09:09:49.911856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:16:37.736 09:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65322 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65322 /var/tmp/bdevperf.sock 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 65322 ']' 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:37.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:37.736 09:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:37.736 [2024-05-15 09:09:50.155437] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
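Both the clean and dirty variants then drive I/O the same way: bdevperf is started with -z so it waits for RPCs, the exported namespace is attached over NVMe/TCP, and the lvstore is grown while the 10-second random-write run is in flight. A condensed sketch, with every flag taken from the log (the $lvs placeholder stands for whichever lvstore UUID the current variant created):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # wait for /var/tmp/bdevperf.sock to come up, then attach the target namespace
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # start the workload, then grow the lvstore underneath it
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99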
00:16:37.736 [2024-05-15 09:09:50.155519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65322 ] 00:16:37.995 [2024-05-15 09:09:50.287761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.995 [2024-05-15 09:09:50.410909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.931 09:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:38.931 09:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:16:38.931 09:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:38.931 Nvme0n1 00:16:38.931 09:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:39.190 [ 00:16:39.190 { 00:16:39.190 "name": "Nvme0n1", 00:16:39.190 "aliases": [ 00:16:39.190 "fe45670e-ed1a-40d0-9a6f-2fcdcb7da816" 00:16:39.190 ], 00:16:39.190 "product_name": "NVMe disk", 00:16:39.190 "block_size": 4096, 00:16:39.190 "num_blocks": 38912, 00:16:39.190 "uuid": "fe45670e-ed1a-40d0-9a6f-2fcdcb7da816", 00:16:39.190 "assigned_rate_limits": { 00:16:39.190 "rw_ios_per_sec": 0, 00:16:39.190 "rw_mbytes_per_sec": 0, 00:16:39.190 "r_mbytes_per_sec": 0, 00:16:39.190 "w_mbytes_per_sec": 0 00:16:39.190 }, 00:16:39.190 "claimed": false, 00:16:39.190 "zoned": false, 00:16:39.190 "supported_io_types": { 00:16:39.190 "read": true, 00:16:39.190 "write": true, 00:16:39.190 "unmap": true, 00:16:39.190 "write_zeroes": true, 00:16:39.190 "flush": true, 00:16:39.190 "reset": true, 00:16:39.190 "compare": true, 00:16:39.190 "compare_and_write": true, 00:16:39.190 "abort": true, 00:16:39.190 "nvme_admin": true, 00:16:39.190 "nvme_io": true 00:16:39.190 }, 00:16:39.190 "memory_domains": [ 00:16:39.190 { 00:16:39.190 "dma_device_id": "system", 00:16:39.190 "dma_device_type": 1 00:16:39.190 } 00:16:39.190 ], 00:16:39.190 "driver_specific": { 00:16:39.190 "nvme": [ 00:16:39.190 { 00:16:39.190 "trid": { 00:16:39.190 "trtype": "TCP", 00:16:39.190 "adrfam": "IPv4", 00:16:39.190 "traddr": "10.0.0.2", 00:16:39.190 "trsvcid": "4420", 00:16:39.190 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:39.190 }, 00:16:39.190 "ctrlr_data": { 00:16:39.190 "cntlid": 1, 00:16:39.190 "vendor_id": "0x8086", 00:16:39.190 "model_number": "SPDK bdev Controller", 00:16:39.190 "serial_number": "SPDK0", 00:16:39.190 "firmware_revision": "24.05", 00:16:39.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:39.190 "oacs": { 00:16:39.190 "security": 0, 00:16:39.190 "format": 0, 00:16:39.190 "firmware": 0, 00:16:39.190 "ns_manage": 0 00:16:39.190 }, 00:16:39.190 "multi_ctrlr": true, 00:16:39.190 "ana_reporting": false 00:16:39.190 }, 00:16:39.190 "vs": { 00:16:39.190 "nvme_version": "1.3" 00:16:39.190 }, 00:16:39.190 "ns_data": { 00:16:39.190 "id": 1, 00:16:39.190 "can_share": true 00:16:39.190 } 00:16:39.190 } 00:16:39.190 ], 00:16:39.190 "mp_policy": "active_passive" 00:16:39.190 } 00:16:39.190 } 00:16:39.190 ] 00:16:39.190 09:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65351 00:16:39.190 09:09:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:39.190 09:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:39.449 Running I/O for 10 seconds... 00:16:40.383 Latency(us) 00:16:40.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.383 Nvme0n1 : 1.00 11176.00 43.66 0.00 0.00 0.00 0.00 0.00 00:16:40.383 =================================================================================================================== 00:16:40.383 Total : 11176.00 43.66 0.00 0.00 0.00 0.00 0.00 00:16:40.383 00:16:41.314 09:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:41.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.314 Nvme0n1 : 2.00 11176.00 43.66 0.00 0.00 0.00 0.00 0.00 00:16:41.314 =================================================================================================================== 00:16:41.314 Total : 11176.00 43.66 0.00 0.00 0.00 0.00 0.00 00:16:41.314 00:16:41.572 true 00:16:41.572 09:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:41.572 09:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:41.831 09:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:41.831 09:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:41.831 09:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65351 00:16:42.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.399 Nvme0n1 : 3.00 11091.33 43.33 0.00 0.00 0.00 0.00 0.00 00:16:42.399 =================================================================================================================== 00:16:42.399 Total : 11091.33 43.33 0.00 0.00 0.00 0.00 0.00 00:16:42.399 00:16:43.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.337 Nvme0n1 : 4.00 11080.75 43.28 0.00 0.00 0.00 0.00 0.00 00:16:43.337 =================================================================================================================== 00:16:43.337 Total : 11080.75 43.28 0.00 0.00 0.00 0.00 0.00 00:16:43.337 00:16:44.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.280 Nvme0n1 : 5.00 11049.00 43.16 0.00 0.00 0.00 0.00 0.00 00:16:44.280 =================================================================================================================== 00:16:44.280 Total : 11049.00 43.16 0.00 0.00 0.00 0.00 0.00 00:16:44.280 00:16:45.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.669 Nvme0n1 : 6.00 11027.83 43.08 0.00 0.00 0.00 0.00 0.00 00:16:45.669 =================================================================================================================== 00:16:45.669 Total : 11027.83 43.08 0.00 0.00 0.00 0.00 0.00 00:16:45.669 00:16:46.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:16:46.236 Nvme0n1 : 7.00 10727.29 41.90 0.00 0.00 0.00 0.00 0.00 00:16:46.236 =================================================================================================================== 00:16:46.236 Total : 10727.29 41.90 0.00 0.00 0.00 0.00 0.00 00:16:46.236 00:16:47.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.610 Nvme0n1 : 8.00 10735.75 41.94 0.00 0.00 0.00 0.00 0.00 00:16:47.610 =================================================================================================================== 00:16:47.610 Total : 10735.75 41.94 0.00 0.00 0.00 0.00 0.00 00:16:47.610 00:16:48.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.545 Nvme0n1 : 9.00 10742.33 41.96 0.00 0.00 0.00 0.00 0.00 00:16:48.545 =================================================================================================================== 00:16:48.545 Total : 10742.33 41.96 0.00 0.00 0.00 0.00 0.00 00:16:48.545 00:16:49.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.480 Nvme0n1 : 10.00 10760.30 42.03 0.00 0.00 0.00 0.00 0.00 00:16:49.480 =================================================================================================================== 00:16:49.480 Total : 10760.30 42.03 0.00 0.00 0.00 0.00 0.00 00:16:49.480 00:16:49.480 00:16:49.480 Latency(us) 00:16:49.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.480 Nvme0n1 : 10.00 10756.21 42.02 0.00 0.00 11894.21 4837.18 209715.20 00:16:49.480 =================================================================================================================== 00:16:49.480 Total : 10756.21 42.02 0.00 0.00 11894.21 4837.18 209715.20 00:16:49.480 0 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65322 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 65322 ']' 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 65322 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 65322 00:16:49.480 killing process with pid 65322 00:16:49.480 Received shutdown signal, test time was about 10.000000 seconds 00:16:49.480 00:16:49.480 Latency(us) 00:16:49.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.480 =================================================================================================================== 00:16:49.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 65322' 00:16:49.480 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 65322 00:16:49.480 09:10:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 65322 00:16:49.737 09:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:50.040 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:50.041 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:50.041 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64969 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64969 00:16:50.606 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64969 Killed "${NVMF_APP[@]}" "$@" 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65484 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65484 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 65484 ']' 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:50.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:50.606 09:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:50.606 [2024-05-15 09:10:02.843367] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
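What makes this second pass the dirty variant is visible just above and below: the first nvmf target was killed with SIGKILL (pid 64969), so the lvstore was never cleanly unloaded, and the freshly started target has to replay its metadata when the backing AIO bdev reappears, hence the blobstore recovery notices a few entries further down. In outline (the pid and expected cluster counts are this run's values; $nvmfpid, $aio_file and $lvs are shorthands, not necessarily the literal script variables):

    kill -9 "$nvmfpid"                               # leave the lvstore dirty on purpose
    # restart nvmf_tgt, then re-register the backing file
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096   # triggers "Performing recovery on blobstore"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99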
00:16:50.606 [2024-05-15 09:10:02.844581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.606 [2024-05-15 09:10:02.981869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.865 [2024-05-15 09:10:03.090395] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.865 [2024-05-15 09:10:03.090452] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.865 [2024-05-15 09:10:03.090464] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.865 [2024-05-15 09:10:03.090475] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.865 [2024-05-15 09:10:03.090483] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.865 [2024-05-15 09:10:03.090510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.800 09:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:51.800 09:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:16:51.800 09:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.800 09:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:51.800 09:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:51.800 09:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.800 09:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:52.058 [2024-05-15 09:10:04.251306] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:52.058 [2024-05-15 09:10:04.252194] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:52.058 [2024-05-15 09:10:04.252888] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:52.058 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:52.058 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 00:16:52.058 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 00:16:52.058 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:16:52.058 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:16:52.058 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:16:52.058 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:16:52.058 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:52.316 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 -t 2000 00:16:52.574 [ 00:16:52.574 { 00:16:52.574 "name": "fe45670e-ed1a-40d0-9a6f-2fcdcb7da816", 00:16:52.574 "aliases": [ 00:16:52.574 "lvs/lvol" 00:16:52.574 ], 00:16:52.574 "product_name": "Logical Volume", 00:16:52.574 "block_size": 4096, 00:16:52.574 "num_blocks": 38912, 00:16:52.574 "uuid": "fe45670e-ed1a-40d0-9a6f-2fcdcb7da816", 00:16:52.574 "assigned_rate_limits": { 00:16:52.574 "rw_ios_per_sec": 0, 00:16:52.574 "rw_mbytes_per_sec": 0, 00:16:52.574 "r_mbytes_per_sec": 0, 00:16:52.574 "w_mbytes_per_sec": 0 00:16:52.574 }, 00:16:52.574 "claimed": false, 00:16:52.574 "zoned": false, 00:16:52.574 "supported_io_types": { 00:16:52.574 "read": true, 00:16:52.574 "write": true, 00:16:52.574 "unmap": true, 00:16:52.574 "write_zeroes": true, 00:16:52.574 "flush": false, 00:16:52.574 "reset": true, 00:16:52.574 "compare": false, 00:16:52.574 "compare_and_write": false, 00:16:52.574 "abort": false, 00:16:52.574 "nvme_admin": false, 00:16:52.574 "nvme_io": false 00:16:52.574 }, 00:16:52.574 "driver_specific": { 00:16:52.574 "lvol": { 00:16:52.574 "lvol_store_uuid": "5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb", 00:16:52.574 "base_bdev": "aio_bdev", 00:16:52.574 "thin_provision": false, 00:16:52.574 "num_allocated_clusters": 38, 00:16:52.574 "snapshot": false, 00:16:52.574 "clone": false, 00:16:52.574 "esnap_clone": false 00:16:52.574 } 00:16:52.574 } 00:16:52.574 } 00:16:52.574 ] 00:16:52.574 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:16:52.574 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:52.574 09:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:52.832 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:52.833 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:52.833 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:53.091 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:53.091 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:53.349 [2024-05-15 09:10:05.676284] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:53.349 09:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:53.607 request: 00:16:53.607 { 00:16:53.607 "uuid": "5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb", 00:16:53.607 "method": "bdev_lvol_get_lvstores", 00:16:53.607 "req_id": 1 00:16:53.607 } 00:16:53.607 Got JSON-RPC error response 00:16:53.607 response: 00:16:53.607 { 00:16:53.607 "code": -19, 00:16:53.607 "message": "No such device" 00:16:53.607 } 00:16:53.607 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:16:53.607 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:53.607 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:53.607 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:53.607 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:53.865 aio_bdev 00:16:53.865 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 00:16:53.865 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 00:16:53.865 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:16:53.865 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:16:53.865 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:16:53.865 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:16:53.865 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:54.123 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 -t 2000 00:16:54.382 [ 00:16:54.382 { 00:16:54.382 "name": "fe45670e-ed1a-40d0-9a6f-2fcdcb7da816", 00:16:54.382 "aliases": [ 00:16:54.382 "lvs/lvol" 00:16:54.382 ], 00:16:54.382 "product_name": "Logical Volume", 00:16:54.382 "block_size": 4096, 00:16:54.382 "num_blocks": 38912, 00:16:54.382 
"uuid": "fe45670e-ed1a-40d0-9a6f-2fcdcb7da816", 00:16:54.382 "assigned_rate_limits": { 00:16:54.382 "rw_ios_per_sec": 0, 00:16:54.382 "rw_mbytes_per_sec": 0, 00:16:54.382 "r_mbytes_per_sec": 0, 00:16:54.382 "w_mbytes_per_sec": 0 00:16:54.382 }, 00:16:54.382 "claimed": false, 00:16:54.382 "zoned": false, 00:16:54.382 "supported_io_types": { 00:16:54.382 "read": true, 00:16:54.382 "write": true, 00:16:54.382 "unmap": true, 00:16:54.382 "write_zeroes": true, 00:16:54.382 "flush": false, 00:16:54.382 "reset": true, 00:16:54.382 "compare": false, 00:16:54.382 "compare_and_write": false, 00:16:54.382 "abort": false, 00:16:54.382 "nvme_admin": false, 00:16:54.382 "nvme_io": false 00:16:54.382 }, 00:16:54.382 "driver_specific": { 00:16:54.382 "lvol": { 00:16:54.382 "lvol_store_uuid": "5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb", 00:16:54.382 "base_bdev": "aio_bdev", 00:16:54.382 "thin_provision": false, 00:16:54.382 "num_allocated_clusters": 38, 00:16:54.382 "snapshot": false, 00:16:54.382 "clone": false, 00:16:54.382 "esnap_clone": false 00:16:54.382 } 00:16:54.382 } 00:16:54.382 } 00:16:54.382 ] 00:16:54.382 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:16:54.382 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:54.382 09:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:54.640 09:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:54.640 09:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:54.640 09:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:55.204 09:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:55.204 09:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fe45670e-ed1a-40d0-9a6f-2fcdcb7da816 00:16:55.463 09:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5bceeaf2-6f9b-4cfe-9e0e-418ee0b202cb 00:16:55.720 09:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:55.978 09:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:56.237 ************************************ 00:16:56.237 END TEST lvs_grow_dirty 00:16:56.237 ************************************ 00:16:56.237 00:16:56.237 real 0m21.151s 00:16:56.237 user 0m45.535s 00:16:56.237 sys 0m9.650s 00:16:56.237 09:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:56.237 09:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 
00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:56.495 nvmf_trace.0 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.495 rmmod nvme_tcp 00:16:56.495 rmmod nvme_fabrics 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65484 ']' 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65484 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 65484 ']' 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 65484 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 65484 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 65484' 00:16:56.495 killing process with pid 65484 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 65484 00:16:56.495 09:10:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 65484 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:56.754 00:16:56.754 real 0m42.242s 00:16:56.754 user 1m9.066s 00:16:56.754 sys 0m13.394s 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:56.754 ************************************ 00:16:56.754 END TEST nvmf_lvs_grow 00:16:56.754 ************************************ 00:16:56.754 09:10:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:57.013 09:10:09 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:57.013 09:10:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:57.013 09:10:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:57.013 09:10:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.013 ************************************ 00:16:57.013 START TEST nvmf_bdev_io_wait 00:16:57.013 ************************************ 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:57.013 * Looking for test storage... 00:16:57.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:57.013 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:16:57.013 Cannot find device "nvmf_tgt_br" 00:16:57.014 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:16:57.014 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:57.014 Cannot find device "nvmf_tgt_br2" 00:16:57.014 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:16:57.014 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:57.014 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:57.014 Cannot find device "nvmf_tgt_br" 00:16:57.014 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:16:57.014 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:57.275 Cannot find device "nvmf_tgt_br2" 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:57.275 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:57.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:16:57.534 00:16:57.534 --- 10.0.0.2 ping statistics --- 00:16:57.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.534 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:57.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:57.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:16:57.534 00:16:57.534 --- 10.0.0.3 ping statistics --- 00:16:57.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.534 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:57.534 00:16:57.534 --- 10.0.0.1 ping statistics --- 00:16:57.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.534 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65809 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65809 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 65809 ']' 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.534 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:57.535 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.535 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:57.535 09:10:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.535 [2024-05-15 09:10:09.831525] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:16:57.535 [2024-05-15 09:10:09.831809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.795 [2024-05-15 09:10:09.984992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.795 [2024-05-15 09:10:10.104079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.795 [2024-05-15 09:10:10.104335] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:57.795 [2024-05-15 09:10:10.104439] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.795 [2024-05-15 09:10:10.104491] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.795 [2024-05-15 09:10:10.104519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.795 [2024-05-15 09:10:10.104712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.795 [2024-05-15 09:10:10.104821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.795 [2024-05-15 09:10:10.105072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.795 [2024-05-15 09:10:10.105078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.469 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.728 [2024-05-15 09:10:10.898824] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.728 Malloc0 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.728 
09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.728 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.729 [2024-05-15 09:10:10.954120] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:58.729 [2024-05-15 09:10:10.954653] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65845 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65846 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65849 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65850 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:58.729 { 00:16:58.729 "params": { 00:16:58.729 "name": "Nvme$subsystem", 00:16:58.729 "trtype": "$TEST_TRANSPORT", 00:16:58.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.729 "adrfam": "ipv4", 00:16:58.729 "trsvcid": "$NVMF_PORT", 00:16:58.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.729 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:16:58.729 "hdgst": ${hdgst:-false}, 00:16:58.729 "ddgst": ${ddgst:-false} 00:16:58.729 }, 00:16:58.729 "method": "bdev_nvme_attach_controller" 00:16:58.729 } 00:16:58.729 EOF 00:16:58.729 )") 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:58.729 { 00:16:58.729 "params": { 00:16:58.729 "name": "Nvme$subsystem", 00:16:58.729 "trtype": "$TEST_TRANSPORT", 00:16:58.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.729 "adrfam": "ipv4", 00:16:58.729 "trsvcid": "$NVMF_PORT", 00:16:58.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.729 "hdgst": ${hdgst:-false}, 00:16:58.729 "ddgst": ${ddgst:-false} 00:16:58.729 }, 00:16:58.729 "method": "bdev_nvme_attach_controller" 00:16:58.729 } 00:16:58.729 EOF 00:16:58.729 )") 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:58.729 "params": { 00:16:58.729 "name": "Nvme1", 00:16:58.729 "trtype": "tcp", 00:16:58.729 "traddr": "10.0.0.2", 00:16:58.729 "adrfam": "ipv4", 00:16:58.729 "trsvcid": "4420", 00:16:58.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.729 "hdgst": false, 00:16:58.729 "ddgst": false 00:16:58.729 }, 00:16:58.729 "method": "bdev_nvme_attach_controller" 00:16:58.729 }' 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:58.729 "params": { 00:16:58.729 "name": "Nvme1", 00:16:58.729 "trtype": "tcp", 00:16:58.729 "traddr": "10.0.0.2", 00:16:58.729 "adrfam": "ipv4", 00:16:58.729 "trsvcid": "4420", 00:16:58.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.729 "hdgst": false, 00:16:58.729 "ddgst": false 00:16:58.729 }, 00:16:58.729 "method": "bdev_nvme_attach_controller" 00:16:58.729 }' 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:58.729 09:10:10 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:58.729 { 00:16:58.729 "params": { 00:16:58.729 "name": "Nvme$subsystem", 00:16:58.729 "trtype": "$TEST_TRANSPORT", 00:16:58.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.729 "adrfam": "ipv4", 00:16:58.729 "trsvcid": "$NVMF_PORT", 00:16:58.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.729 "hdgst": ${hdgst:-false}, 00:16:58.729 "ddgst": ${ddgst:-false} 00:16:58.729 }, 00:16:58.729 "method": "bdev_nvme_attach_controller" 00:16:58.729 } 00:16:58.729 EOF 00:16:58.729 )") 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:58.729 { 00:16:58.729 "params": { 00:16:58.729 "name": "Nvme$subsystem", 00:16:58.729 "trtype": "$TEST_TRANSPORT", 00:16:58.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.729 "adrfam": "ipv4", 00:16:58.729 "trsvcid": "$NVMF_PORT", 00:16:58.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.729 "hdgst": ${hdgst:-false}, 00:16:58.729 "ddgst": ${ddgst:-false} 00:16:58.729 }, 00:16:58.729 "method": "bdev_nvme_attach_controller" 00:16:58.729 } 00:16:58.729 EOF 00:16:58.729 )") 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:58.729 09:10:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:58.729 09:10:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:58.729 09:10:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65845 00:16:58.729 09:10:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:58.729 09:10:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:58.729 "params": { 00:16:58.729 "name": "Nvme1", 00:16:58.729 "trtype": "tcp", 00:16:58.729 "traddr": "10.0.0.2", 00:16:58.729 "adrfam": "ipv4", 00:16:58.729 "trsvcid": "4420", 00:16:58.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.729 "hdgst": false, 00:16:58.729 "ddgst": false 00:16:58.729 }, 00:16:58.729 "method": "bdev_nvme_attach_controller" 00:16:58.729 }' 00:16:58.729 09:10:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:58.729 [2024-05-15 09:10:11.021237] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:16:58.729 [2024-05-15 09:10:11.021352] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:58.729 [2024-05-15 09:10:11.021854] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:16:58.729 [2024-05-15 09:10:11.022093] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:58.729 [2024-05-15 09:10:11.024421] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:16:58.729 [2024-05-15 09:10:11.024740] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:58.729 [2024-05-15 09:10:11.025854] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:16:58.729 [2024-05-15 09:10:11.026095] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:58.729 09:10:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:58.729 09:10:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:58.729 "params": { 00:16:58.729 "name": "Nvme1", 00:16:58.729 "trtype": "tcp", 00:16:58.729 "traddr": "10.0.0.2", 00:16:58.729 "adrfam": "ipv4", 00:16:58.729 "trsvcid": "4420", 00:16:58.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.729 "hdgst": false, 00:16:58.729 "ddgst": false 00:16:58.729 }, 00:16:58.729 "method": "bdev_nvme_attach_controller" 00:16:58.729 }' 00:16:58.988 [2024-05-15 09:10:11.292239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.988 [2024-05-15 09:10:11.354084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.988 [2024-05-15 09:10:11.400849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:58.988 [2024-05-15 09:10:11.424673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.247 [2024-05-15 09:10:11.467927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:59.247 [2024-05-15 09:10:11.489655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.247 [2024-05-15 09:10:11.522669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:59.247 Running I/O for 1 seconds... 00:16:59.247 [2024-05-15 09:10:11.591313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:59.247 Running I/O for 1 seconds... 00:16:59.247 Running I/O for 1 seconds... 00:16:59.505 Running I/O for 1 seconds... 
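Before the latency tables below, it helps to see the wiring in one place. The target was configured over rpc_cmd right after nvmf_tgt came up with --wait-for-rpc, and each of the four bdevperf instances above receives its NVMe-oF controller definition as JSON on /dev/fd/63, which is consistent with a process substitution of gen_nvmf_target_json's output. A condensed sketch of the write-workload pair, reusing only flags and parameters that appear in the trace; the rpc variable, the temporary JSON path, the assumption that rpc_cmd is interchangeable with scripts/rpc.py here, and the "subsystems"/"config" wrapper around the printed bdev_nvme_attach_controller entry are mine:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# Target side (nvmf_tgt runs inside nvmf_tgt_ns_spdk, paused by --wait-for-rpc)
$rpc bdev_set_options -p 5 -c 1                      # small bdev I/O pool/cache, as in the trace, so the io_wait path is exercised
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB namespace, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach Nvme1 over TCP, then run 128-deep 4 KiB writes for 1 second
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Core mask 0x10, shm id 1, -s 256 memory, exactly the flags used for the write instance in the trace
$bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap instances differ only in core mask, shm id/file-prefix and the -w workload argument.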
00:17:00.441
00:17:00.441 Latency(us)
00:17:00.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.441 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:17:00.441 Nvme1n1 : 1.02 7040.78 27.50 0.00 0.00 17914.17 5461.33 34952.53
00:17:00.441 ===================================================================================================================
00:17:00.441 Total : 7040.78 27.50 0.00 0.00 17914.17 5461.33 34952.53
00:17:00.441
00:17:00.441 Latency(us)
00:17:00.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.441 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:17:00.441 Nvme1n1 : 1.01 6449.39 25.19 0.00 0.00 19776.54 6303.94 40944.40
00:17:00.441 ===================================================================================================================
00:17:00.441 Total : 6449.39 25.19 0.00 0.00 19776.54 6303.94 40944.40
00:17:00.441
00:17:00.441 Latency(us)
00:17:00.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.441 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:17:00.441 Nvme1n1 : 1.00 180931.07 706.76 0.00 0.00 705.00 331.58 1209.30
00:17:00.441 ===================================================================================================================
00:17:00.441 Total : 180931.07 706.76 0.00 0.00 705.00 331.58 1209.30
00:17:00.441
00:17:00.441 Latency(us)
00:17:00.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.441 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:17:00.441 Nvme1n1 : 1.01 8132.56 31.77 0.00 0.00 15666.08 8238.81 31831.77
00:17:00.441 ===================================================================================================================
00:17:00.441 Total : 8132.56 31.77 0.00 0.00 15666.08 8238.81 31831.77
00:17:00.699 09:10:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65846 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65849 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65850 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.699 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.699 rmmod nvme_tcp 00:17:00.699 rmmod nvme_fabrics 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe
-v -r nvme-fabrics 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65809 ']' 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65809 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 65809 ']' 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 65809 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 65809 00:17:00.956 killing process with pid 65809 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 65809' 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 65809 00:17:00.956 [2024-05-15 09:10:13.185660] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 65809 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.956 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.215 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:01.215 00:17:01.215 real 0m4.223s 00:17:01.215 user 0m18.043s 00:17:01.215 sys 0m2.446s 00:17:01.215 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:01.215 09:10:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:01.215 ************************************ 00:17:01.215 END TEST nvmf_bdev_io_wait 00:17:01.215 ************************************ 00:17:01.215 09:10:13 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:01.215 09:10:13 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:01.215 09:10:13 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:01.215 09:10:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.215 ************************************ 
00:17:01.215 START TEST nvmf_queue_depth 00:17:01.215 ************************************ 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:01.215 * Looking for test storage... 00:17:01.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:01.215 Cannot find device "nvmf_tgt_br" 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:17:01.215 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.555 Cannot find device "nvmf_tgt_br2" 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:01.555 Cannot find device "nvmf_tgt_br" 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:01.555 Cannot find device "nvmf_tgt_br2" 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:01.555 09:10:13 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:01.555 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:01.556 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:17:01.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:17:01.814 00:17:01.814 --- 10.0.0.2 ping statistics --- 00:17:01.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.814 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:01.814 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:01.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:01.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:17:01.814 00:17:01.814 --- 10.0.0.3 ping statistics --- 00:17:01.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.814 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:01.814 09:10:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:01.814 00:17:01.814 --- 10.0.0.1 ping statistics --- 00:17:01.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.814 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66079 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66079 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 66079 ']' 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:01.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
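For readers reconstructing the environment by hand, the nvmf_veth_init trace above boils down to the following topology. This is a condensed sketch using the interface names, addresses and firewall rules copied verbatim from the log; the earlier "Cannot find device" / "Cannot open network namespace" messages are the helper's best-effort cleanup of leftovers and are expected on a fresh host.

# One veth pair for the initiator, two for the target; the target ends live in a
# dedicated network namespace and the host-side ends are tied together by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c \
    'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # connectivity checks, as above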
00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:01.814 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:01.814 [2024-05-15 09:10:14.109082] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:17:01.814 [2024-05-15 09:10:14.109249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.072 [2024-05-15 09:10:14.260004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.072 [2024-05-15 09:10:14.430073] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.072 [2024-05-15 09:10:14.430145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.072 [2024-05-15 09:10:14.430162] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.072 [2024-05-15 09:10:14.430176] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.072 [2024-05-15 09:10:14.430187] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.072 [2024-05-15 09:10:14.430227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.639 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:02.639 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:17:02.639 09:10:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.639 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:02.639 09:10:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:02.639 09:10:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.639 09:10:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.639 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.639 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:02.639 [2024-05-15 09:10:15.047958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.639 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.639 09:10:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:02.639 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.639 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:02.897 Malloc0 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.897 09:10:15 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:02.897 [2024-05-15 09:10:15.112778] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:02.897 [2024-05-15 09:10:15.113058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66117 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66117 /var/tmp/bdevperf.sock 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 66117 ']' 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:02.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:02.897 09:10:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:02.897 [2024-05-15 09:10:15.176511] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
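The rpc_cmd calls traced above drive ordinary SPDK JSON-RPC methods. Below is a hedged sketch of the same queue-depth setup written against scripts/rpc.py directly (an assumed simplification: the test's rpc_cmd helper issues the same methods over the default /var/tmp/spdk.sock), followed by the bdevperf run that continues in the trace.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                  # flags exactly as in the trace
$RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf acts as the NVMe/TCP initiator: 1024 outstanding 4 KiB verify I/Os for 10 s.
# (The test waits for /var/tmp/bdevperf.sock to appear before attaching.)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests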
00:17:02.897 [2024-05-15 09:10:15.176639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66117 ] 00:17:02.897 [2024-05-15 09:10:15.323204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.156 [2024-05-15 09:10:15.439801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.089 09:10:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:04.089 09:10:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:17:04.089 09:10:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:04.089 09:10:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.089 09:10:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:04.089 NVMe0n1 00:17:04.089 09:10:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.089 09:10:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:04.089 Running I/O for 10 seconds... 00:17:14.112 00:17:14.112 Latency(us) 00:17:14.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.112 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:14.112 Verification LBA range: start 0x0 length 0x4000 00:17:14.112 NVMe0n1 : 10.09 9014.67 35.21 0.00 0.00 113051.70 24591.60 89877.94 00:17:14.112 =================================================================================================================== 00:17:14.112 Total : 9014.67 35.21 0.00 0.00 113051.70 24591.60 89877.94 00:17:14.112 0 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66117 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 66117 ']' 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 66117 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 66117 00:17:14.372 killing process with pid 66117 00:17:14.372 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.372 00:17:14.372 Latency(us) 00:17:14.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.372 =================================================================================================================== 00:17:14.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 66117' 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 66117 00:17:14.372 09:10:26 nvmf_tcp.nvmf_queue_depth 
-- common/autotest_common.sh@971 -- # wait 66117 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.631 rmmod nvme_tcp 00:17:14.631 rmmod nvme_fabrics 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66079 ']' 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66079 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 66079 ']' 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 66079 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 66079 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 66079' 00:17:14.631 killing process with pid 66079 00:17:14.631 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 66079 00:17:14.631 [2024-05-15 09:10:26.948492] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 09:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 66079 00:17:14.631 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush 
nvmf_init_if 00:17:14.890 00:17:14.890 real 0m13.738s 00:17:14.890 user 0m23.197s 00:17:14.890 sys 0m2.875s 00:17:14.890 ************************************ 00:17:14.890 END TEST nvmf_queue_depth 00:17:14.890 ************************************ 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:14.890 09:10:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.890 09:10:27 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:14.890 09:10:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:14.890 09:10:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:14.890 09:10:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.890 ************************************ 00:17:14.890 START TEST nvmf_target_multipath 00:17:14.890 ************************************ 00:17:14.890 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:15.149 * Looking for test storage... 00:17:15.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.149 09:10:27 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.149 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
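The host identity generated when common.sh was sourced above (nvme gen-hostnqn feeding NVME_HOSTNQN and NVME_HOSTID) is what lets the kernel initiator later attach the same subsystem through two listeners and merge them into one multipath device. A hedged sketch of that later connect step, with flags copied from the trace further below; the HOSTID derivation here is an illustrative assumption, mirroring how the test keeps the UUID portion of the NQN.

HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:c738663f-...
HOSTID=${HOSTNQN##*:}                # UUID part only, matching NVME_HOSTID above
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G   # flags as in the trace
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# With kernel-native multipath this yields a single namespace (/dev/nvme0n1) backed by
# two path devices (nvme0c0n1, nvme0c1n1) whose ana_state files the test polls later.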
00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # 
ip link set nvmf_init_br nomaster 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:15.150 Cannot find device "nvmf_tgt_br" 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:15.150 Cannot find device "nvmf_tgt_br2" 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:15.150 Cannot find device "nvmf_tgt_br" 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:15.150 Cannot find device "nvmf_tgt_br2" 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:15.150 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:15.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:15.409 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:15.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:15.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:17:15.410 00:17:15.410 --- 10.0.0.2 ping statistics --- 00:17:15.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.410 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:15.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:15.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:15.410 00:17:15.410 --- 10.0.0.3 ping statistics --- 00:17:15.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.410 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:15.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:15.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:15.410 00:17:15.410 --- 10.0.0.1 ping statistics --- 00:17:15.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.410 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66439 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66439 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@828 -- # '[' -z 66439 ']' 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:15.410 09:10:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:15.669 [2024-05-15 09:10:27.875590] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
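Once the target below is provisioned (note the -r flag on nvmf_create_subsystem, which enables ANA reporting, and the two listeners on 10.0.0.2 and 10.0.0.3), the core of this test is flipping per-listener ANA states while fio keeps randrw I/O (bs=4096, iodepth=128, /dev/nvme0n1) running. A hedged sketch of that loop, with RPC names and sysfs paths taken from the trace and the test's helper functions reduced to their essence:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
set_ana() {   # set_ana <listener-ip> <ana-state>
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a "$1" -s 4420 -n "$2"
}
wait_ana() {  # wait_ana <path-device> <expected-state>; ~20 s budget, as in check_ana_state
    local f=/sys/block/$1/ana_state t=20
    while [[ ! -e $f || $(<"$f") != "$2" ]] && (( t-- > 0 )); do sleep 1; done
}
set_ana 10.0.0.2 inaccessible;   wait_ana nvme0c0n1 inaccessible
set_ana 10.0.0.3 non_optimized;  wait_ana nvme0c1n1 non-optimized   # RPC uses '_', sysfs reports '-'
# fio (scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v) runs throughout, so I/O
# must fail over to whichever path is still serviceable; both listeners are later returned to
# optimized and the sequence repeats after the "echo numa" / "echo round-robin" policy steps
# visible in the trace.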
00:17:15.669 [2024-05-15 09:10:27.876276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.669 [2024-05-15 09:10:28.025670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:15.927 [2024-05-15 09:10:28.136728] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.927 [2024-05-15 09:10:28.136989] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.927 [2024-05-15 09:10:28.137126] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.928 [2024-05-15 09:10:28.137180] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.928 [2024-05-15 09:10:28.137210] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.928 [2024-05-15 09:10:28.137417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.928 [2024-05-15 09:10:28.137619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.928 [2024-05-15 09:10:28.138415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.928 [2024-05-15 09:10:28.138415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.525 09:10:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:16.525 09:10:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@861 -- # return 0 00:17:16.525 09:10:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.525 09:10:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:16.525 09:10:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:16.525 09:10:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.525 09:10:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:16.783 [2024-05-15 09:10:29.165372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.783 09:10:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:17.041 Malloc0 00:17:17.041 09:10:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:17:17.300 09:10:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:17.867 09:10:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.867 [2024-05-15 09:10:30.224729] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:17.867 [2024-05-15 09:10:30.225388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:17:17.867 09:10:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:18.124 [2024-05-15 09:10:30.449218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:18.124 09:10:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid=c738663f-2662-4398-b539-15f14394251b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:17:18.383 09:10:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid=c738663f-2662-4398-b539-15f14394251b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:17:18.383 09:10:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.383 09:10:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local i=0 00:17:18.383 09:10:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.383 09:10:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:17:18.383 09:10:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # sleep 2 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # return 0 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:20.908 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@76 -- # (( 2 == 2 )) 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66533 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:20.909 09:10:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:17:20.909 [global] 00:17:20.909 thread=1 00:17:20.909 invalidate=1 00:17:20.909 rw=randrw 00:17:20.909 time_based=1 00:17:20.909 runtime=6 00:17:20.909 ioengine=libaio 00:17:20.909 direct=1 00:17:20.909 bs=4096 00:17:20.909 iodepth=128 00:17:20.909 norandommap=0 00:17:20.909 numjobs=1 00:17:20.909 00:17:20.909 verify_dump=1 00:17:20.909 verify_backlog=512 00:17:20.909 verify_state_save=0 00:17:20.909 do_verify=1 00:17:20.909 verify=crc32c-intel 00:17:20.909 [job0] 00:17:20.909 filename=/dev/nvme0n1 00:17:20.909 Could not set queue depth (nvme0n1) 00:17:20.909 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.909 fio-3.35 00:17:20.909 Starting 1 thread 00:17:21.473 09:10:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:21.730 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:22.296 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:17:22.296 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- 
# local path=nvme0c0n1 ana_state=inaccessible 00:17:22.296 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:22.296 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:22.296 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:22.296 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:22.297 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:17:22.297 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:22.297 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:22.297 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:22.297 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:22.297 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:22.297 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:22.554 09:10:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:22.812 09:10:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66533 00:17:27.051 00:17:27.051 job0: (groupid=0, jobs=1): err= 0: pid=66555: Wed May 15 09:10:39 2024 00:17:27.051 read: IOPS=9684, BW=37.8MiB/s (39.7MB/s)(227MiB/6006msec) 00:17:27.051 slat (usec): min=4, max=8968, avg=62.22, stdev=263.67 00:17:27.051 clat (usec): min=1452, max=27422, avg=8945.23, stdev=2435.81 00:17:27.051 lat (usec): min=1464, max=28007, avg=9007.45, stdev=2448.71 00:17:27.051 clat percentiles (usec): 00:17:27.051 | 1.00th=[ 4686], 5.00th=[ 6456], 10.00th=[ 7111], 20.00th=[ 7570], 00:17:27.051 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8717], 00:17:27.051 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11731], 95.00th=[13304], 00:17:27.051 | 99.00th=[17957], 99.50th=[20055], 99.90th=[25560], 99.95th=[26084], 00:17:27.051 | 99.99th=[27395] 00:17:27.051 bw ( KiB/s): min= 7784, max=25848, per=52.43%, avg=20310.00, stdev=5826.18, samples=12 00:17:27.051 iops : min= 1946, max= 6462, avg=5077.50, stdev=1456.54, samples=12 00:17:27.051 write: IOPS=5579, BW=21.8MiB/s (22.9MB/s)(119MiB/5481msec); 0 zone resets 00:17:27.051 slat (usec): min=9, max=3029, avg=67.75, stdev=195.17 00:17:27.051 clat (usec): min=2529, max=25727, avg=7904.15, stdev=2199.23 00:17:27.051 lat (usec): min=2548, max=27512, avg=7971.89, stdev=2212.26 00:17:27.051 clat percentiles (usec): 00:17:27.051 | 1.00th=[ 3490], 5.00th=[ 4621], 10.00th=[ 5800], 20.00th=[ 6783], 00:17:27.051 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7898], 00:17:27.051 | 70.00th=[ 8225], 80.00th=[ 8848], 90.00th=[10028], 95.00th=[11338], 00:17:27.051 | 99.00th=[16450], 99.50th=[17695], 99.90th=[20055], 99.95th=[23200], 00:17:27.051 | 99.99th=[24511] 00:17:27.051 bw ( KiB/s): min= 8376, max=25408, per=91.15%, avg=20343.33, stdev=5582.99, samples=12 00:17:27.051 iops : min= 2094, max= 6352, avg=5085.83, stdev=1395.75, samples=12 00:17:27.051 lat (msec) : 2=0.01%, 4=1.17%, 10=81.87%, 20=16.57%, 50=0.38% 00:17:27.051 cpu : usr=5.06%, sys=18.56%, ctx=5108, majf=0, minf=96 00:17:27.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:27.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.052 issued rwts: total=58166,30580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.052 00:17:27.052 Run status group 0 (all jobs): 00:17:27.052 READ: bw=37.8MiB/s (39.7MB/s), 37.8MiB/s-37.8MiB/s (39.7MB/s-39.7MB/s), io=227MiB (238MB), run=6006-6006msec 00:17:27.052 WRITE: bw=21.8MiB/s (22.9MB/s), 21.8MiB/s-21.8MiB/s (22.9MB/s-22.9MB/s), io=119MiB (125MB), run=5481-5481msec 00:17:27.052 00:17:27.052 Disk stats (read/write): 00:17:27.052 nvme0n1: ios=57499/29659, merge=0/0, ticks=495706/222394, in_queue=718100, util=98.52% 00:17:27.052 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:27.052 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 -n optimized 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66635 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:27.310 09:10:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:17:27.568 [global] 00:17:27.568 thread=1 00:17:27.568 invalidate=1 00:17:27.568 rw=randrw 00:17:27.568 time_based=1 00:17:27.568 runtime=6 00:17:27.568 ioengine=libaio 00:17:27.568 direct=1 00:17:27.568 bs=4096 00:17:27.568 iodepth=128 00:17:27.568 norandommap=0 00:17:27.568 numjobs=1 00:17:27.568 00:17:27.568 verify_dump=1 00:17:27.568 verify_backlog=512 00:17:27.568 verify_state_save=0 00:17:27.568 do_verify=1 00:17:27.568 verify=crc32c-intel 00:17:27.568 [job0] 00:17:27.568 filename=/dev/nvme0n1 00:17:27.568 Could not set queue depth (nvme0n1) 00:17:27.568 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.568 fio-3.35 00:17:27.568 Starting 1 thread 00:17:28.500 09:10:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:28.776 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:29.034 09:10:41 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:29.034 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:29.292 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:29.550 09:10:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66635 00:17:33.834 00:17:33.834 job0: (groupid=0, jobs=1): err= 0: pid=66656: Wed May 15 09:10:46 2024 00:17:33.834 read: IOPS=10.5k, BW=41.0MiB/s (42.9MB/s)(246MiB/6006msec) 00:17:33.834 slat (usec): min=5, max=5976, avg=48.23, stdev=195.65 00:17:33.834 clat (usec): min=342, max=23945, avg=8307.95, stdev=3463.98 00:17:33.834 lat (usec): min=369, max=23958, avg=8356.18, stdev=3464.76 00:17:33.834 clat percentiles (usec): 00:17:33.834 | 1.00th=[ 988], 5.00th=[ 1991], 10.00th=[ 4752], 20.00th=[ 6915], 00:17:33.834 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8094], 00:17:33.834 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[12256], 95.00th=[16188], 00:17:33.834 | 99.00th=[19530], 99.50th=[20841], 99.90th=[22414], 99.95th=[22676], 00:17:33.834 | 99.99th=[23462] 00:17:33.834 bw ( KiB/s): min=11136, max=29168, per=51.59%, avg=21637.82, stdev=5651.12, samples=11 00:17:33.834 iops : min= 2784, max= 7292, avg=5409.45, stdev=1412.78, samples=11 00:17:33.834 write: IOPS=6166, BW=24.1MiB/s (25.3MB/s)(130MiB/5395msec); 0 zone resets 00:17:33.834 slat (usec): min=7, max=2310, avg=53.08, stdev=144.06 00:17:33.834 clat (usec): min=233, max=21560, avg=7192.63, stdev=3115.50 00:17:33.834 lat (usec): min=279, max=21579, avg=7245.71, stdev=3116.29 00:17:33.834 clat percentiles (usec): 00:17:33.834 | 1.00th=[ 840], 5.00th=[ 1532], 10.00th=[ 3294], 20.00th=[ 5735], 00:17:33.834 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:17:33.834 | 70.00th=[ 7701], 80.00th=[ 8291], 90.00th=[10552], 95.00th=[14353], 00:17:33.834 | 99.00th=[16188], 99.50th=[17433], 99.90th=[20055], 99.95th=[20317], 00:17:33.834 | 99.99th=[21365] 00:17:33.834 bw ( KiB/s): min=11160, max=28608, per=87.92%, avg=21685.09, stdev=5504.99, samples=11 00:17:33.834 iops : min= 2790, max= 7152, avg=5421.27, stdev=1376.25, samples=11 00:17:33.834 lat (usec) : 250=0.01%, 500=0.10%, 750=0.40%, 1000=0.83% 00:17:33.834 lat (msec) : 2=4.23%, 4=4.54%, 10=73.97%, 20=15.34%, 50=0.58% 00:17:33.834 cpu : usr=5.49%, sys=20.23%, ctx=6797, majf=0, minf=96 00:17:33.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:33.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:33.834 issued rwts: total=62975,33266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:33.834 00:17:33.834 Run status group 0 (all jobs): 00:17:33.834 READ: bw=41.0MiB/s (42.9MB/s), 41.0MiB/s-41.0MiB/s (42.9MB/s-42.9MB/s), io=246MiB (258MB), run=6006-6006msec 00:17:33.834 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=130MiB (136MB), run=5395-5395msec 00:17:33.834 00:17:33.834 Disk stats (read/write): 00:17:33.834 nvme0n1: ios=62001/32602, merge=0/0, ticks=496674/222086, in_queue=718760, util=98.68% 00:17:33.834 09:10:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:33.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:33.834 09:10:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:33.834 09:10:46 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # local i=0 00:17:33.834 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:17:33.834 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.834 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:17:33.834 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.834 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1228 -- # return 0 00:17:33.834 09:10:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.092 rmmod nvme_tcp 00:17:34.092 rmmod nvme_fabrics 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 66439 ']' 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66439 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@947 -- # '[' -z 66439 ']' 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # kill -0 66439 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # uname 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:34.092 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 66439 00:17:34.407 killing process with pid 66439 00:17:34.407 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:34.407 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:34.407 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 66439' 00:17:34.407 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # kill 66439 00:17:34.407 [2024-05-15 09:10:46.546155] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:34.407 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@971 -- # wait 66439 00:17:34.407 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.407 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:34.407 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:34.408 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.408 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.408 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.408 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.408 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.408 09:10:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:34.666 ************************************ 00:17:34.666 END TEST nvmf_target_multipath 00:17:34.666 ************************************ 00:17:34.666 00:17:34.666 real 0m19.557s 00:17:34.666 user 1m12.676s 00:17:34.666 sys 0m9.952s 00:17:34.666 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:34.666 09:10:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:34.666 09:10:46 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:34.666 09:10:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:34.666 09:10:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:34.666 09:10:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:34.666 ************************************ 00:17:34.666 START TEST nvmf_zcopy 00:17:34.666 ************************************ 00:17:34.666 09:10:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:34.666 * Looking for test storage... 
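For reference, the ANA handling exercised in the multipath run above reduces to two grounded operations: a target-side rpc.py call that changes the ANA state a listener advertises, and an initiator-side sysfs read that check_ana_state repeats (up to the 20-second timeout seen in the trace) until the expected state appears. A condensed sketch using only the addresses and device names from this log:

    # target side: flip the ANA state advertised by each listener
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    # initiator side: the kernel reports the per-path state that check_ana_state reads
    cat /sys/block/nvme0c0n1/ana_state   # expected: non-optimized
    cat /sys/block/nvme0c1n1/ana_state   # expected: inaccessible

While the states are being toggled, fio keeps a randrw verify job running across the multipath device, which is what produces the two job summaries above (both finishing with err=0).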
00:17:34.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:34.666 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:34.667 Cannot find device "nvmf_tgt_br" 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.667 Cannot find device "nvmf_tgt_br2" 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:34.667 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:34.925 Cannot find device "nvmf_tgt_br" 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:34.925 Cannot find device "nvmf_tgt_br2" 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:34.925 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:35.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:17:35.183 00:17:35.183 --- 10.0.0.2 ping statistics --- 00:17:35.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.183 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:35.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:35.183 00:17:35.183 --- 10.0.0.3 ping statistics --- 00:17:35.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.183 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:17:35.183 00:17:35.183 --- 10.0.0.1 ping statistics --- 00:17:35.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.183 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=66901 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 66901 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 66901 ']' 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:35.183 09:10:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:35.183 [2024-05-15 09:10:47.571110] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:17:35.183 [2024-05-15 09:10:47.571416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.442 [2024-05-15 09:10:47.716009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.442 [2024-05-15 09:10:47.878740] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.442 [2024-05-15 09:10:47.879030] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
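The nvmf_veth_init sequence traced above is what these pings verify: the target lives in its own network namespace, reachable from the host over bridged veth pairs, with an iptables rule opening the NVMe/TCP port. Stripped of the shell helpers, the commands visible in the log amount to roughly the following (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the host namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target, as checked above
    # the target itself then runs inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2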
00:17:35.442 [2024-05-15 09:10:47.879148] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.442 [2024-05-15 09:10:47.879205] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.442 [2024-05-15 09:10:47.879236] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.442 [2024-05-15 09:10:47.879298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 [2024-05-15 09:10:48.665592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 [2024-05-15 09:10:48.689523] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:36.378 [2024-05-15 09:10:48.689903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 malloc0 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:36.378 { 00:17:36.378 "params": { 00:17:36.378 "name": "Nvme$subsystem", 00:17:36.378 "trtype": "$TEST_TRANSPORT", 00:17:36.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.378 "adrfam": "ipv4", 00:17:36.378 "trsvcid": "$NVMF_PORT", 00:17:36.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.378 "hdgst": ${hdgst:-false}, 00:17:36.378 "ddgst": ${ddgst:-false} 00:17:36.378 }, 00:17:36.378 "method": "bdev_nvme_attach_controller" 00:17:36.378 } 00:17:36.378 EOF 00:17:36.378 )") 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:36.378 09:10:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:36.378 "params": { 00:17:36.378 "name": "Nvme1", 00:17:36.378 "trtype": "tcp", 00:17:36.378 "traddr": "10.0.0.2", 00:17:36.378 "adrfam": "ipv4", 00:17:36.378 "trsvcid": "4420", 00:17:36.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.378 "hdgst": false, 00:17:36.378 "ddgst": false 00:17:36.378 }, 00:17:36.378 "method": "bdev_nvme_attach_controller" 00:17:36.378 }' 00:17:36.379 [2024-05-15 09:10:48.786058] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:17:36.379 [2024-05-15 09:10:48.786362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66940 ] 00:17:36.673 [2024-05-15 09:10:48.926779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.673 [2024-05-15 09:10:49.045238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.930 Running I/O for 10 seconds... 
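For reference, the target-side setup that precedes this 10-second run condenses to the RPC sequence traced above (the script wraps each call in rpc_cmd, which talks to the /var/tmp/spdk.sock socket of the nvmf_tgt started earlier); a bare-bones sketch using the same arguments:

    # TCP transport with zero-copy enabled (flags exactly as in the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # back namespace 1 with a 32 MB malloc bdev using a 4096-byte block size
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # the 10-second verify workload against the resulting controller
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192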
00:17:46.894 00:17:46.894 Latency(us) 00:17:46.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.894 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:46.894 Verification LBA range: start 0x0 length 0x1000 00:17:46.894 Nvme1n1 : 10.02 6423.63 50.18 0.00 0.00 19864.38 2590.23 50181.85 00:17:46.894 =================================================================================================================== 00:17:46.894 Total : 6423.63 50.18 0.00 0.00 19864.38 2590.23 50181.85 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67056 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.167 { 00:17:47.167 "params": { 00:17:47.167 "name": "Nvme$subsystem", 00:17:47.167 "trtype": "$TEST_TRANSPORT", 00:17:47.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.167 "adrfam": "ipv4", 00:17:47.167 "trsvcid": "$NVMF_PORT", 00:17:47.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.167 "hdgst": ${hdgst:-false}, 00:17:47.167 "ddgst": ${ddgst:-false} 00:17:47.167 }, 00:17:47.167 "method": "bdev_nvme_attach_controller" 00:17:47.167 } 00:17:47.167 EOF 00:17:47.167 )") 00:17:47.167 [2024-05-15 09:10:59.466642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.468091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
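Two details in this stretch of output are worth decoding. First, the --json /dev/fd/62 and /dev/fd/63 arguments feed bdevperf a configuration assembled by gen_nvmf_target_json; the controller entry it resolves to is printed verbatim in the trace and, reformatted, is simply:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

Second, the long run of 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs that follows is not the workload failing: each pair is a namespace-add RPC for NSID 1 (which malloc0 already occupies) being rejected while the 5-second randrw bdevperf run keeps zero-copy I/O in flight, presumably issued by the test on purpose so that subsystem pause/resume is exercised with requests outstanding.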
00:17:47.167 [2024-05-15 09:10:59.474615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.474641] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:47.167 09:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.167 "params": { 00:17:47.167 "name": "Nvme1", 00:17:47.167 "trtype": "tcp", 00:17:47.167 "traddr": "10.0.0.2", 00:17:47.167 "adrfam": "ipv4", 00:17:47.167 "trsvcid": "4420", 00:17:47.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.167 "hdgst": false, 00:17:47.167 "ddgst": false 00:17:47.167 }, 00:17:47.167 "method": "bdev_nvme_attach_controller" 00:17:47.167 }' 00:17:47.167 [2024-05-15 09:10:59.486624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.486809] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.498661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.498898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.506557] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:17:47.167 [2024-05-15 09:10:59.506773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67056 ] 00:17:47.167 [2024-05-15 09:10:59.510638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.510824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.522621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.522770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.534651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.534869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.546635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.546806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.558637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.558778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.570647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.570807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.582637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.582808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.594622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:17:47.167 [2024-05-15 09:10:59.594792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.167 [2024-05-15 09:10:59.606626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.167 [2024-05-15 09:10:59.606751] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.618630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.618734] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.630642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.630763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.642662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.642805] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.646597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.427 [2024-05-15 09:10:59.650647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.650770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.658665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.658807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.666672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.666785] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.674656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.674760] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.682668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.682806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.690659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.690763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.698671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.698800] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.706682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.706809] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.714687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.714820] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.722686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.722813] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.730693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.730836] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.738682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.738784] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.746696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.746816] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.754707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.754847] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.762714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.762851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.770705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.770828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.778713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.778837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.786705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.786716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.427 [2024-05-15 09:10:59.786860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.794709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.794848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.802710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.802845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.810719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.810860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.818717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.818839] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.826725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.826867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.427 [2024-05-15 09:10:59.834719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.427 [2024-05-15 09:10:59.834860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace
00:17:47.427 [2024-05-15 09:10:59.842718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:47.427 [2024-05-15 09:10:59.842827] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of *ERROR* lines from subsystem.c:2029 ("Requested NSID 1 already in use") and nvmf_rpc.c:1536 ("Unable to add namespace") repeats with advancing timestamps from 09:10:59.850719 through 09:10:59.978927 ...]
00:17:47.686 Running I/O for 5 seconds...
[... the same pair of *ERROR* lines continues repeating with advancing timestamps from 09:10:59.990596 through 09:11:04.102924 while the I/O run is in progress ...]
00:17:51.821 [2024-05-15 09:11:04.112407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:51.821 [2024-05-15 09:11:04.112558]
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.126031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.126176] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.141091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.141255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.152458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.152637] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.169053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.169243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.178753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.178902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.192195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.192369] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.201102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.201247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.214978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.215139] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.224332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.224495] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.238192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.238359] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.253300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.253445] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.821 [2024-05-15 09:11:04.264112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:51.821 [2024-05-15 09:11:04.264261] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.279627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.279776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.290689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.290844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.307089] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.307239] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.322602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.322761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.333685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.333834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.341971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.342115] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.356410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.356559] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.372848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.372986] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.387618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.387776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.402619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.402756] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.417325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.417496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.434151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.434314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.449295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.449469] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.460378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.460526] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.476306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.476470] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.493660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.493819] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.079 [2024-05-15 09:11:04.510816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.079 [2024-05-15 09:11:04.510974] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.527424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.527608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.544887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.545026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.561091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.561252] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.577279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.577442] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.595869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.596027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.609854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.609998] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.626670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.626812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.642217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.642375] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.660731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.660914] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.670198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.670341] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.683599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.683758] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.691772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.691930] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.706165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.706336] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.726287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.726486] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.742059] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.742231] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.760200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.760374] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.337 [2024-05-15 09:11:04.775012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.337 [2024-05-15 09:11:04.775197] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.597 [2024-05-15 09:11:04.791354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.597 [2024-05-15 09:11:04.791609] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.597 [2024-05-15 09:11:04.808720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.597 [2024-05-15 09:11:04.808908] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.597 [2024-05-15 09:11:04.819989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.597 [2024-05-15 09:11:04.820146] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.597 [2024-05-15 09:11:04.829314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.597 [2024-05-15 09:11:04.829462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.597 [2024-05-15 09:11:04.843465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.597 [2024-05-15 09:11:04.843667] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.597 [2024-05-15 09:11:04.859381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.597 [2024-05-15 09:11:04.859573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.597 [2024-05-15 09:11:04.876678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.597 [2024-05-15 09:11:04.876849] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:04.893125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:04.893293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:04.910959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:04.911119] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:04.927305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:04.927462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:04.944170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:04.944330] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:04.960839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:04.960999] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:04.977710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:04.977863] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:04.989216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:04.989392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 00:17:52.598 Latency(us) 00:17:52.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.598 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:52.598 Nvme1n1 : 5.01 12413.80 96.98 0.00 0.00 10299.10 2512.21 24716.43 00:17:52.598 =================================================================================================================== 00:17:52.598 Total : 12413.80 96.98 0.00 0.00 10299.10 2512.21 24716.43 00:17:52.598 [2024-05-15 09:11:04.997212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:04.997387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:05.009232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:05.009386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:05.021232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:05.021446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.598 [2024-05-15 09:11:05.033227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.598 [2024-05-15 09:11:05.033400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.045246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.045432] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.057235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.057416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.069238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.069416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.081236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.081404] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.093256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.093453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.105244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.105412] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.117241] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.117384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.129237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.129384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.141277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.141457] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.153255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.153413] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.165259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.165405] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.177264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.177408] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.189268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.189428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.201285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.201473] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.213278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.213450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 [2024-05-15 09:11:05.225274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.856 [2024-05-15 09:11:05.225410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.856 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67056) - No such process 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67056 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:52.856 delay0 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.856 09:11:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:53.114 [2024-05-15 09:11:05.429191] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:01.223 Initializing NVMe Controllers 00:18:01.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.223 Initialization complete. Launching workers. 00:18:01.223 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 28538 00:18:01.223 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 28653, failed to submit 122 00:18:01.223 success 28568, unsuccess 85, failed 0 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:01.223 rmmod nvme_tcp 00:18:01.223 rmmod nvme_fabrics 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 66901 ']' 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 66901 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 66901 ']' 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 66901 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 66901 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 66901' 00:18:01.223 killing process with pid 66901 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 66901 00:18:01.223 [2024-05-15 09:11:12.574240] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 66901 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:01.223 00:18:01.223 real 0m25.973s 00:18:01.223 user 0m41.183s 00:18:01.223 sys 0m8.158s 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:01.223 09:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.223 ************************************ 00:18:01.223 END TEST nvmf_zcopy 00:18:01.223 ************************************ 00:18:01.223 09:11:12 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:01.223 09:11:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:01.223 09:11:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:01.223 09:11:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:01.224 ************************************ 00:18:01.224 START TEST nvmf_nmic 00:18:01.224 ************************************ 00:18:01.224 09:11:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:01.224 * Looking for test storage... 
00:18:01.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:01.224 Cannot find device "nvmf_tgt_br" 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.224 Cannot find device "nvmf_tgt_br2" 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:01.224 Cannot find device "nvmf_tgt_br" 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:01.224 Cannot find device "nvmf_tgt_br2" 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:01.224 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:01.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:01.225 00:18:01.225 --- 10.0.0.2 ping statistics --- 00:18:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.225 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:01.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:01.225 00:18:01.225 --- 10.0.0.3 ping statistics --- 00:18:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.225 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:01.225 00:18:01.225 --- 10.0.0.1 ping statistics --- 00:18:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.225 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67393 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67393 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 67393 ']' 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:01.225 09:11:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.225 [2024-05-15 09:11:13.580789] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:18:01.225 [2024-05-15 09:11:13.581042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.483 [2024-05-15 09:11:13.733242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:01.483 [2024-05-15 09:11:13.854074] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.483 [2024-05-15 09:11:13.854533] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
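The nvmf_veth_init output above assembles the virtual network the nmic test runs over: the SPDK target sits inside the nvmf_tgt_ns_spdk namespace behind veth interfaces carrying 10.0.0.2/24 and 10.0.0.3/24, the initiator keeps 10.0.0.1/24 on nvmf_init_if, and the host-side peer ends are patched together through the nvmf_br bridge with iptables rules admitting TCP port 4420. A trimmed, hand-runnable sketch of the same idea follows; it is a simplification of nvmf_veth_init, not its literal implementation, and the second target interface (nvmf_tgt_if2 with 10.0.0.3) is omitted because it follows the same pattern.

# Sketch: condensed version of the topology nvmf_veth_init builds above
# (names and addresses taken from the log; second target leg omitted).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# The host-side veth ends hang off one bridge so initiator and target can
# reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic on the port the test listens on, and let the
# bridge forward between its ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity check, mirroring the pings recorded in the log.
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1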
00:18:01.483 [2024-05-15 09:11:13.854808] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.483 [2024-05-15 09:11:13.855144] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.483 [2024-05-15 09:11:13.855347] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.483 [2024-05-15 09:11:13.855813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.483 [2024-05-15 09:11:13.855936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.483 [2024-05-15 09:11:13.855987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.483 [2024-05-15 09:11:13.856376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.417 [2024-05-15 09:11:14.628177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.417 Malloc0 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.417 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 [2024-05-15 09:11:14.714418] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:02.418 [2024-05-15 09:11:14.715024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:02.418 test case1: single bdev can't be used in multiple subsystems 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 [2024-05-15 09:11:14.750476] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:02.418 [2024-05-15 09:11:14.750871] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:02.418 [2024-05-15 09:11:14.751108] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.418 request: 00:18:02.418 { 00:18:02.418 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:02.418 "namespace": { 00:18:02.418 "bdev_name": "Malloc0", 00:18:02.418 "no_auto_visible": false 00:18:02.418 }, 00:18:02.418 "method": "nvmf_subsystem_add_ns", 00:18:02.418 "req_id": 1 00:18:02.418 } 00:18:02.418 Got JSON-RPC error response 00:18:02.418 response: 00:18:02.418 { 00:18:02.418 "code": -32602, 00:18:02.418 "message": "Invalid parameters" 00:18:02.418 } 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:02.418 Adding namespace failed - expected result. 
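Test case 1 above exercises the claim semantics nmic.sh is checking: once Malloc0 is attached as namespace 1 of cnode1, the bdev layer holds an exclusive_write claim on it, so attaching the same bdev to cnode2 fails with JSON-RPC error -32602 ("Invalid parameters"), and the script treats that failure as the expected result. A minimal out-of-band reproduction of the same sequence is sketched below; the NQNs, serials, sizes and RPC names are taken from the log, while the standalone scripts/rpc.py invocation and the $SPDK_DIR variable are assumptions standing in for the test's rpc_cmd wrapper against an already-running nvmf_tgt.

# Sketch: reproduce the exclusive_write claim conflict from test case 1.
# Assumes a running SPDK nvmf_tgt and $SPDK_DIR pointing at the SPDK tree.
rpc="$SPDK_DIR/scripts/rpc.py"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0

# The first subsystem takes the bdev as namespace 1 and claims it.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# A second subsystem cannot reuse the claimed bdev; this add_ns is expected
# to fail with "Invalid parameters" (code -32602), as seen in the log.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'Adding namespace failed - expected result.'
fi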
00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:02.418 test case2: host connect to nvmf target in multiple paths 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 [2024-05-15 09:11:14.770660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.418 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid=c738663f-2662-4398-b539-15f14394251b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:02.676 09:11:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid=c738663f-2662-4398-b539-15f14394251b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:02.676 09:11:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:02.676 09:11:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:18:02.676 09:11:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.676 09:11:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:18:02.676 09:11:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:18:05.204 09:11:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:18:05.204 09:11:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.204 09:11:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:05.204 09:11:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:18:05.204 09:11:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.204 09:11:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:18:05.204 09:11:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:05.204 [global] 00:18:05.204 thread=1 00:18:05.204 invalidate=1 00:18:05.204 rw=write 00:18:05.204 time_based=1 00:18:05.204 runtime=1 00:18:05.204 ioengine=libaio 00:18:05.204 direct=1 00:18:05.204 bs=4096 00:18:05.204 iodepth=1 00:18:05.204 norandommap=0 00:18:05.204 numjobs=1 00:18:05.204 00:18:05.204 verify_dump=1 00:18:05.204 verify_backlog=512 00:18:05.204 verify_state_save=0 00:18:05.204 do_verify=1 00:18:05.204 verify=crc32c-intel 00:18:05.204 [job0] 00:18:05.204 filename=/dev/nvme0n1 00:18:05.204 Could not set queue depth (nvme0n1) 00:18:05.204 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:05.204 fio-3.35 00:18:05.204 Starting 1 thread 00:18:06.139 00:18:06.139 job0: (groupid=0, jobs=1): err= 0: pid=67485: Wed May 15 09:11:18 2024 00:18:06.139 read: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(13.3MiB/1001msec) 00:18:06.139 slat (nsec): min=8709, max=73119, avg=11866.51, stdev=3781.21 00:18:06.139 clat (usec): 
min=116, max=715, avg=160.27, stdev=28.48 00:18:06.139 lat (usec): min=127, max=725, avg=172.13, stdev=28.96 00:18:06.139 clat percentiles (usec): 00:18:06.139 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:18:06.139 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 163], 00:18:06.139 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 192], 00:18:06.139 | 99.00th=[ 225], 99.50th=[ 265], 99.90th=[ 545], 99.95th=[ 693], 00:18:06.139 | 99.99th=[ 717] 00:18:06.139 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:18:06.140 slat (usec): min=12, max=123, avg=17.83, stdev= 4.96 00:18:06.140 clat (usec): min=70, max=319, avg=94.66, stdev=14.96 00:18:06.140 lat (usec): min=85, max=349, avg=112.49, stdev=16.48 00:18:06.140 clat percentiles (usec): 00:18:06.140 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 85], 00:18:06.140 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 95], 00:18:06.140 | 70.00th=[ 98], 80.00th=[ 103], 90.00th=[ 111], 95.00th=[ 119], 00:18:06.140 | 99.00th=[ 139], 99.50th=[ 149], 99.90th=[ 269], 99.95th=[ 306], 00:18:06.140 | 99.99th=[ 318] 00:18:06.140 bw ( KiB/s): min=15736, max=15736, per=100.00%, avg=15736.00, stdev= 0.00, samples=1 00:18:06.140 iops : min= 3934, max= 3934, avg=3934.00, stdev= 0.00, samples=1 00:18:06.140 lat (usec) : 100=38.33%, 250=61.25%, 500=0.34%, 750=0.07% 00:18:06.140 cpu : usr=1.90%, sys=8.80%, ctx=7002, majf=0, minf=2 00:18:06.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:06.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.140 issued rwts: total=3415,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:06.140 00:18:06.140 Run status group 0 (all jobs): 00:18:06.140 READ: bw=13.3MiB/s (14.0MB/s), 13.3MiB/s-13.3MiB/s (14.0MB/s-14.0MB/s), io=13.3MiB (14.0MB), run=1001-1001msec 00:18:06.140 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:18:06.140 00:18:06.140 Disk stats (read/write): 00:18:06.140 nvme0n1: ios=3121/3145, merge=0/0, ticks=510/318, in_queue=828, util=90.94% 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.140 rmmod nvme_tcp 00:18:06.140 rmmod nvme_fabrics 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67393 ']' 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67393 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 67393 ']' 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 67393 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 67393 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 67393' 00:18:06.140 killing process with pid 67393 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 67393 00:18:06.140 [2024-05-15 09:11:18.524109] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:06.140 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 67393 00:18:06.399 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.399 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.399 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.399 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.399 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.400 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.400 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.400 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.400 09:11:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:06.400 00:18:06.400 real 0m5.866s 00:18:06.400 user 0m18.042s 00:18:06.400 sys 0m2.547s 00:18:06.400 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:06.400 ************************************ 00:18:06.400 END TEST nvmf_nmic 00:18:06.400 ************************************ 00:18:06.400 09:11:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:06.660 09:11:18 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:06.660 09:11:18 nvmf_tcp -- 
common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:06.660 09:11:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:06.660 09:11:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:06.660 ************************************ 00:18:06.660 START TEST nvmf_fio_target 00:18:06.660 ************************************ 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:06.660 * Looking for test storage... 00:18:06.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:06.660 09:11:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:06.660 Cannot find device "nvmf_tgt_br" 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:06.660 Cannot find device "nvmf_tgt_br2" 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:18:06.660 Cannot find device "nvmf_tgt_br" 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:06.660 Cannot find device "nvmf_tgt_br2" 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:06.660 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:06.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:06.919 00:18:06.919 --- 10.0.0.2 ping statistics --- 00:18:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.919 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:06.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:06.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:18:06.919 00:18:06.919 --- 10.0.0.3 ping statistics --- 00:18:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.919 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:06.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:06.919 00:18:06.919 --- 10.0.0.1 ping statistics --- 00:18:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.919 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.919 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67662 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67662 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 67662 ']' 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.177 09:11:19 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:07.177 09:11:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.177 [2024-05-15 09:11:19.441023] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:18:07.177 [2024-05-15 09:11:19.441125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.177 [2024-05-15 09:11:19.579381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.436 [2024-05-15 09:11:19.682368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.436 [2024-05-15 09:11:19.682658] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.436 [2024-05-15 09:11:19.682828] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.436 [2024-05-15 09:11:19.682931] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.436 [2024-05-15 09:11:19.682966] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.436 [2024-05-15 09:11:19.683158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.436 [2024-05-15 09:11:19.683207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.436 [2024-05-15 09:11:19.683260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.436 [2024-05-15 09:11:19.683262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.003 09:11:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:08.003 09:11:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:18:08.003 09:11:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.003 09:11:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:08.003 09:11:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.003 09:11:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.003 09:11:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:08.572 [2024-05-15 09:11:20.712902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.572 09:11:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.572 09:11:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:08.572 09:11:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.831 09:11:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:08.831 09:11:21 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:09.396 09:11:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:09.396 09:11:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:09.654 09:11:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:09.654 09:11:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:09.974 09:11:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.234 09:11:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:10.234 09:11:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.234 09:11:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:10.234 09:11:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.493 09:11:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:10.493 09:11:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:10.753 09:11:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:11.012 09:11:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:11.012 09:11:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.269 09:11:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:11.269 09:11:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:11.526 09:11:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.526 [2024-05-15 09:11:23.921181] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:11.526 [2024-05-15 09:11:23.921921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.526 09:11:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:11.783 09:11:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:12.039 09:11:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid=c738663f-2662-4398-b539-15f14394251b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.296 09:11:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:18:12.296 09:11:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:18:12.297 09:11:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.297 09:11:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:18:12.297 09:11:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:18:12.297 09:11:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:18:14.205 09:11:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:18:14.205 09:11:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:14.205 09:11:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:18:14.205 09:11:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:18:14.205 09:11:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.205 09:11:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:18:14.205 09:11:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:14.205 [global] 00:18:14.205 thread=1 00:18:14.205 invalidate=1 00:18:14.205 rw=write 00:18:14.205 time_based=1 00:18:14.205 runtime=1 00:18:14.205 ioengine=libaio 00:18:14.205 direct=1 00:18:14.205 bs=4096 00:18:14.205 iodepth=1 00:18:14.205 norandommap=0 00:18:14.205 numjobs=1 00:18:14.205 00:18:14.205 verify_dump=1 00:18:14.205 verify_backlog=512 00:18:14.205 verify_state_save=0 00:18:14.205 do_verify=1 00:18:14.205 verify=crc32c-intel 00:18:14.205 [job0] 00:18:14.205 filename=/dev/nvme0n1 00:18:14.205 [job1] 00:18:14.205 filename=/dev/nvme0n2 00:18:14.205 [job2] 00:18:14.205 filename=/dev/nvme0n3 00:18:14.205 [job3] 00:18:14.205 filename=/dev/nvme0n4 00:18:14.462 Could not set queue depth (nvme0n1) 00:18:14.462 Could not set queue depth (nvme0n2) 00:18:14.462 Could not set queue depth (nvme0n3) 00:18:14.462 Could not set queue depth (nvme0n4) 00:18:14.462 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.462 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.462 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.463 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.463 fio-3.35 00:18:14.463 Starting 4 threads 00:18:15.836 00:18:15.836 job0: (groupid=0, jobs=1): err= 0: pid=67846: Wed May 15 09:11:27 2024 00:18:15.836 read: IOPS=2006, BW=8028KiB/s (8221kB/s)(8036KiB/1001msec) 00:18:15.836 slat (nsec): min=9131, max=71275, avg=13793.53, stdev=4452.39 00:18:15.836 clat (usec): min=175, max=519, avg=273.09, stdev=46.23 00:18:15.836 lat (usec): min=189, max=538, avg=286.88, stdev=47.48 00:18:15.836 clat percentiles (usec): 00:18:15.836 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 247], 00:18:15.836 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:18:15.836 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 359], 00:18:15.836 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 510], 99.95th=[ 519], 00:18:15.836 | 99.99th=[ 519] 00:18:15.836 write: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:15.836 slat (usec): min=15, max=115, avg=23.28, stdev= 8.84 00:18:15.836 clat (usec): min=92, max=818, avg=180.31, stdev=37.21 00:18:15.836 lat (usec): min=111, max=837, avg=203.59, stdev=38.80 00:18:15.836 clat percentiles (usec): 00:18:15.836 | 1.00th=[ 102], 5.00th=[ 114], 10.00th=[ 125], 20.00th=[ 165], 00:18:15.836 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:18:15.836 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 221], 00:18:15.836 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 469], 99.95th=[ 791], 00:18:15.836 | 99.99th=[ 816] 00:18:15.836 bw ( KiB/s): min= 8192, max= 8192, per=20.21%, avg=8192.00, stdev= 0.00, samples=1 00:18:15.836 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:15.836 lat (usec) : 100=0.30%, 250=62.07%, 500=37.49%, 750=0.10%, 1000=0.05% 00:18:15.836 cpu : usr=1.80%, sys=6.00%, ctx=4057, majf=0, minf=3 00:18:15.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.836 issued rwts: total=2009,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.836 job1: (groupid=0, jobs=1): err= 0: pid=67847: Wed May 15 09:11:27 2024 00:18:15.836 read: IOPS=1952, BW=7808KiB/s (7996kB/s)(7816KiB/1001msec) 00:18:15.836 slat (nsec): min=9219, max=62103, avg=13375.44, stdev=3128.75 00:18:15.836 clat (usec): min=170, max=495, avg=268.90, stdev=32.32 00:18:15.836 lat (usec): min=187, max=536, avg=282.28, stdev=32.76 00:18:15.836 clat percentiles (usec): 00:18:15.836 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:18:15.836 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:18:15.836 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 343], 00:18:15.836 | 99.00th=[ 383], 99.50th=[ 404], 99.90th=[ 461], 99.95th=[ 494], 00:18:15.836 | 99.99th=[ 494] 00:18:15.836 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:15.836 slat (usec): min=14, max=140, avg=22.82, stdev= 8.81 00:18:15.836 clat (usec): min=95, max=1043, avg=192.93, stdev=48.11 00:18:15.836 lat (usec): min=115, max=1067, avg=215.75, stdev=51.99 00:18:15.836 clat percentiles (usec): 00:18:15.836 | 1.00th=[ 113], 5.00th=[ 124], 10.00th=[ 139], 20.00th=[ 172], 00:18:15.836 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:18:15.836 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 231], 95.00th=[ 297], 00:18:15.836 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 392], 99.95th=[ 725], 00:18:15.836 | 99.99th=[ 1045] 00:18:15.836 bw ( KiB/s): min= 8192, max= 8192, per=20.21%, avg=8192.00, stdev= 0.00, samples=1 00:18:15.836 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:15.836 lat (usec) : 100=0.05%, 250=58.97%, 500=40.93%, 750=0.02% 00:18:15.836 lat (msec) : 2=0.02% 00:18:15.836 cpu : usr=1.90%, sys=5.60%, ctx=4004, majf=0, minf=11 00:18:15.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.836 issued rwts: total=1954,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.836 
job2: (groupid=0, jobs=1): err= 0: pid=67848: Wed May 15 09:11:27 2024 00:18:15.836 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:18:15.836 slat (nsec): min=11386, max=55853, avg=13316.39, stdev=2494.35 00:18:15.836 clat (usec): min=145, max=428, avg=185.11, stdev=22.24 00:18:15.836 lat (usec): min=158, max=442, avg=198.42, stdev=22.52 00:18:15.836 clat percentiles (usec): 00:18:15.836 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:18:15.836 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:18:15.836 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 223], 00:18:15.836 | 99.00th=[ 247], 99.50th=[ 265], 99.90th=[ 379], 99.95th=[ 392], 00:18:15.836 | 99.99th=[ 429] 00:18:15.836 write: IOPS=2973, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:18:15.836 slat (usec): min=14, max=413, avg=21.40, stdev=10.46 00:18:15.836 clat (usec): min=3, max=756, avg=141.00, stdev=28.44 00:18:15.836 lat (usec): min=116, max=775, avg=162.40, stdev=30.18 00:18:15.836 clat percentiles (usec): 00:18:15.836 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 117], 20.00th=[ 124], 00:18:15.836 | 30.00th=[ 129], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:18:15.836 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 176], 00:18:15.836 | 99.00th=[ 200], 99.50th=[ 255], 99.90th=[ 553], 99.95th=[ 603], 00:18:15.836 | 99.99th=[ 758] 00:18:15.836 bw ( KiB/s): min=12288, max=12288, per=30.31%, avg=12288.00, stdev= 0.00, samples=1 00:18:15.836 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:15.836 lat (usec) : 4=0.02%, 100=0.07%, 250=99.26%, 500=0.60%, 750=0.04% 00:18:15.836 lat (usec) : 1000=0.02% 00:18:15.836 cpu : usr=2.40%, sys=7.80%, ctx=5569, majf=0, minf=4 00:18:15.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.836 issued rwts: total=2560,2976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.836 job3: (groupid=0, jobs=1): err= 0: pid=67849: Wed May 15 09:11:27 2024 00:18:15.836 read: IOPS=2923, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec) 00:18:15.836 slat (nsec): min=8776, max=68125, avg=11376.69, stdev=2959.17 00:18:15.836 clat (usec): min=139, max=234, avg=175.02, stdev=13.46 00:18:15.836 lat (usec): min=150, max=243, avg=186.40, stdev=14.41 00:18:15.836 clat percentiles (usec): 00:18:15.836 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:18:15.836 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:18:15.836 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:18:15.836 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 227], 99.95th=[ 233], 00:18:15.836 | 99.99th=[ 235] 00:18:15.836 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:18:15.837 slat (usec): min=10, max=138, avg=16.44, stdev= 4.32 00:18:15.837 clat (usec): min=89, max=365, avg=129.11, stdev=15.30 00:18:15.837 lat (usec): min=103, max=380, avg=145.55, stdev=16.75 00:18:15.837 clat percentiles (usec): 00:18:15.837 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 117], 00:18:15.837 | 30.00th=[ 121], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 133], 00:18:15.837 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:18:15.837 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 198], 99.95th=[ 
273], 00:18:15.837 | 99.99th=[ 367] 00:18:15.837 bw ( KiB/s): min=12288, max=12288, per=30.31%, avg=12288.00, stdev= 0.00, samples=1 00:18:15.837 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:15.837 lat (usec) : 100=0.22%, 250=99.75%, 500=0.03% 00:18:15.837 cpu : usr=1.50%, sys=7.10%, ctx=6002, majf=0, minf=17 00:18:15.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.837 issued rwts: total=2926,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.837 00:18:15.837 Run status group 0 (all jobs): 00:18:15.837 READ: bw=36.9MiB/s (38.7MB/s), 7808KiB/s-11.4MiB/s (7996kB/s-12.0MB/s), io=36.9MiB (38.7MB), run=1001-1001msec 00:18:15.837 WRITE: bw=39.6MiB/s (41.5MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.6MiB (41.5MB), run=1001-1001msec 00:18:15.837 00:18:15.837 Disk stats (read/write): 00:18:15.837 nvme0n1: ios=1585/1948, merge=0/0, ticks=448/369, in_queue=817, util=85.73% 00:18:15.837 nvme0n2: ios=1536/1805, merge=0/0, ticks=419/368, in_queue=787, util=86.17% 00:18:15.837 nvme0n3: ios=2063/2560, merge=0/0, ticks=381/390, in_queue=771, util=88.72% 00:18:15.837 nvme0n4: ios=2464/2560, merge=0/0, ticks=436/350, in_queue=786, util=89.57% 00:18:15.837 09:11:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:15.837 [global] 00:18:15.837 thread=1 00:18:15.837 invalidate=1 00:18:15.837 rw=randwrite 00:18:15.837 time_based=1 00:18:15.837 runtime=1 00:18:15.837 ioengine=libaio 00:18:15.837 direct=1 00:18:15.837 bs=4096 00:18:15.837 iodepth=1 00:18:15.837 norandommap=0 00:18:15.837 numjobs=1 00:18:15.837 00:18:15.837 verify_dump=1 00:18:15.837 verify_backlog=512 00:18:15.837 verify_state_save=0 00:18:15.837 do_verify=1 00:18:15.837 verify=crc32c-intel 00:18:15.837 [job0] 00:18:15.837 filename=/dev/nvme0n1 00:18:15.837 [job1] 00:18:15.837 filename=/dev/nvme0n2 00:18:15.837 [job2] 00:18:15.837 filename=/dev/nvme0n3 00:18:15.837 [job3] 00:18:15.837 filename=/dev/nvme0n4 00:18:15.837 Could not set queue depth (nvme0n1) 00:18:15.837 Could not set queue depth (nvme0n2) 00:18:15.837 Could not set queue depth (nvme0n3) 00:18:15.837 Could not set queue depth (nvme0n4) 00:18:15.837 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.837 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.837 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.837 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.837 fio-3.35 00:18:15.837 Starting 4 threads 00:18:17.209 00:18:17.209 job0: (groupid=0, jobs=1): err= 0: pid=67902: Wed May 15 09:11:29 2024 00:18:17.209 read: IOPS=1659, BW=6637KiB/s (6797kB/s)(6644KiB/1001msec) 00:18:17.209 slat (nsec): min=8967, max=67674, avg=13332.37, stdev=4879.71 00:18:17.209 clat (usec): min=169, max=1090, avg=295.87, stdev=49.83 00:18:17.209 lat (usec): min=179, max=1103, avg=309.21, stdev=50.57 00:18:17.209 clat percentiles (usec): 00:18:17.209 | 1.00th=[ 231], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:18:17.210 | 30.00th=[ 269], 
40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:18:17.210 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 375], 95.00th=[ 396], 00:18:17.210 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 545], 99.95th=[ 1090], 00:18:17.210 | 99.99th=[ 1090] 00:18:17.210 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:17.210 slat (usec): min=14, max=114, avg=22.67, stdev= 9.44 00:18:17.210 clat (usec): min=100, max=409, avg=211.79, stdev=61.65 00:18:17.210 lat (usec): min=123, max=455, avg=234.46, stdev=66.07 00:18:17.210 clat percentiles (usec): 00:18:17.210 | 1.00th=[ 115], 5.00th=[ 123], 10.00th=[ 131], 20.00th=[ 159], 00:18:17.210 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:18:17.210 | 70.00th=[ 221], 80.00th=[ 237], 90.00th=[ 322], 95.00th=[ 343], 00:18:17.210 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 400], 99.95th=[ 408], 00:18:17.210 | 99.99th=[ 408] 00:18:17.210 bw ( KiB/s): min= 8192, max= 8192, per=19.68%, avg=8192.00, stdev= 0.00, samples=1 00:18:17.210 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:17.210 lat (usec) : 250=48.34%, 500=51.58%, 750=0.05% 00:18:17.210 lat (msec) : 2=0.03% 00:18:17.210 cpu : usr=1.60%, sys=5.60%, ctx=3710, majf=0, minf=13 00:18:17.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.210 issued rwts: total=1661,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.210 job1: (groupid=0, jobs=1): err= 0: pid=67903: Wed May 15 09:11:29 2024 00:18:17.210 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:18:17.210 slat (nsec): min=9997, max=58170, avg=12431.23, stdev=2897.58 00:18:17.210 clat (usec): min=133, max=333, avg=162.92, stdev=13.90 00:18:17.210 lat (usec): min=144, max=348, avg=175.35, stdev=14.57 00:18:17.210 clat percentiles (usec): 00:18:17.210 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:18:17.210 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:18:17.210 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 188], 00:18:17.210 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 245], 99.95th=[ 249], 00:18:17.210 | 99.99th=[ 334] 00:18:17.210 write: IOPS=3243, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:18:17.210 slat (usec): min=12, max=120, avg=18.86, stdev= 5.56 00:18:17.210 clat (usec): min=84, max=9662, avg=120.46, stdev=183.94 00:18:17.210 lat (usec): min=101, max=9680, avg=139.31, stdev=184.19 00:18:17.210 clat percentiles (usec): 00:18:17.210 | 1.00th=[ 93], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 104], 00:18:17.210 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 119], 00:18:17.210 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 139], 00:18:17.210 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 383], 99.95th=[ 4293], 00:18:17.210 | 99.99th=[ 9634] 00:18:17.210 bw ( KiB/s): min=12288, max=12288, per=29.53%, avg=12288.00, stdev= 0.00, samples=1 00:18:17.210 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:17.210 lat (usec) : 100=4.38%, 250=95.52%, 500=0.05%, 1000=0.02% 00:18:17.210 lat (msec) : 10=0.03% 00:18:17.210 cpu : usr=2.10%, sys=8.20%, ctx=6321, majf=0, minf=7 00:18:17.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.210 issued rwts: total=3072,3247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.210 job2: (groupid=0, jobs=1): err= 0: pid=67904: Wed May 15 09:11:29 2024 00:18:17.210 read: IOPS=2811, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:18:17.210 slat (nsec): min=8555, max=72464, avg=12038.17, stdev=4077.04 00:18:17.210 clat (usec): min=141, max=334, avg=176.02, stdev=15.17 00:18:17.210 lat (usec): min=150, max=344, avg=188.05, stdev=16.46 00:18:17.210 clat percentiles (usec): 00:18:17.210 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:18:17.210 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:18:17.210 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:18:17.210 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 289], 99.95th=[ 322], 00:18:17.210 | 99.99th=[ 334] 00:18:17.210 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:18:17.210 slat (usec): min=10, max=119, avg=18.28, stdev= 5.38 00:18:17.210 clat (usec): min=87, max=327, avg=132.29, stdev=14.21 00:18:17.210 lat (usec): min=107, max=347, avg=150.57, stdev=15.98 00:18:17.210 clat percentiles (usec): 00:18:17.210 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 121], 00:18:17.210 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:18:17.210 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:18:17.210 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 231], 99.95th=[ 260], 00:18:17.210 | 99.99th=[ 330] 00:18:17.210 bw ( KiB/s): min=12288, max=12288, per=29.53%, avg=12288.00, stdev= 0.00, samples=1 00:18:17.210 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:17.210 lat (usec) : 100=0.08%, 250=99.83%, 500=0.08% 00:18:17.210 cpu : usr=1.70%, sys=7.90%, ctx=5889, majf=0, minf=13 00:18:17.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.210 issued rwts: total=2814,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.210 job3: (groupid=0, jobs=1): err= 0: pid=67905: Wed May 15 09:11:29 2024 00:18:17.210 read: IOPS=1817, BW=7269KiB/s (7443kB/s)(7276KiB/1001msec) 00:18:17.210 slat (usec): min=9, max=100, avg=15.27, stdev= 6.66 00:18:17.210 clat (usec): min=175, max=1082, avg=297.64, stdev=60.80 00:18:17.210 lat (usec): min=188, max=1094, avg=312.92, stdev=62.00 00:18:17.210 clat percentiles (usec): 00:18:17.210 | 1.00th=[ 200], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 262], 00:18:17.210 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:18:17.210 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 371], 95.00th=[ 420], 00:18:17.210 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 627], 99.95th=[ 1090], 00:18:17.210 | 99.99th=[ 1090] 00:18:17.210 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:17.210 slat (usec): min=14, max=123, avg=22.27, stdev= 7.32 00:18:17.210 clat (usec): min=87, max=344, avg=184.91, stdev=37.64 00:18:17.210 lat (usec): min=118, max=431, avg=207.18, stdev=39.09 00:18:17.210 clat percentiles (usec): 00:18:17.210 | 1.00th=[ 110], 5.00th=[ 119], 10.00th=[ 125], 20.00th=[ 139], 00:18:17.210 
| 30.00th=[ 176], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:18:17.210 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 235], 00:18:17.210 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 302], 00:18:17.210 | 99.99th=[ 347] 00:18:17.210 bw ( KiB/s): min= 8192, max= 8192, per=19.68%, avg=8192.00, stdev= 0.00, samples=1 00:18:17.210 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:17.210 lat (usec) : 100=0.08%, 250=56.81%, 500=42.46%, 750=0.62% 00:18:17.210 lat (msec) : 2=0.03% 00:18:17.210 cpu : usr=2.00%, sys=5.50%, ctx=3875, majf=0, minf=12 00:18:17.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.210 issued rwts: total=1819,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.210 00:18:17.210 Run status group 0 (all jobs): 00:18:17.210 READ: bw=36.5MiB/s (38.3MB/s), 6637KiB/s-12.0MiB/s (6797kB/s-12.6MB/s), io=36.6MiB (38.4MB), run=1001-1001msec 00:18:17.210 WRITE: bw=40.6MiB/s (42.6MB/s), 8184KiB/s-12.7MiB/s (8380kB/s-13.3MB/s), io=40.7MiB (42.7MB), run=1001-1001msec 00:18:17.210 00:18:17.210 Disk stats (read/write): 00:18:17.210 nvme0n1: ios=1546/1536, merge=0/0, ticks=467/345, in_queue=812, util=85.66% 00:18:17.210 nvme0n2: ios=2608/2727, merge=0/0, ticks=442/337, in_queue=779, util=84.99% 00:18:17.210 nvme0n3: ios=2373/2560, merge=0/0, ticks=436/350, in_queue=786, util=88.68% 00:18:17.210 nvme0n4: ios=1536/1773, merge=0/0, ticks=446/340, in_queue=786, util=89.57% 00:18:17.210 09:11:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:17.210 [global] 00:18:17.210 thread=1 00:18:17.210 invalidate=1 00:18:17.210 rw=write 00:18:17.210 time_based=1 00:18:17.210 runtime=1 00:18:17.210 ioengine=libaio 00:18:17.210 direct=1 00:18:17.210 bs=4096 00:18:17.210 iodepth=128 00:18:17.210 norandommap=0 00:18:17.210 numjobs=1 00:18:17.210 00:18:17.210 verify_dump=1 00:18:17.210 verify_backlog=512 00:18:17.210 verify_state_save=0 00:18:17.210 do_verify=1 00:18:17.210 verify=crc32c-intel 00:18:17.210 [job0] 00:18:17.210 filename=/dev/nvme0n1 00:18:17.210 [job1] 00:18:17.210 filename=/dev/nvme0n2 00:18:17.210 [job2] 00:18:17.210 filename=/dev/nvme0n3 00:18:17.210 [job3] 00:18:17.210 filename=/dev/nvme0n4 00:18:17.210 Could not set queue depth (nvme0n1) 00:18:17.210 Could not set queue depth (nvme0n2) 00:18:17.210 Could not set queue depth (nvme0n3) 00:18:17.210 Could not set queue depth (nvme0n4) 00:18:17.210 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.210 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.210 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.210 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.210 fio-3.35 00:18:17.210 Starting 4 threads 00:18:18.583 00:18:18.583 job0: (groupid=0, jobs=1): err= 0: pid=67965: Wed May 15 09:11:30 2024 00:18:18.583 read: IOPS=4882, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec) 00:18:18.583 slat (usec): min=8, max=6403, avg=95.52, stdev=456.28 00:18:18.583 clat (usec): min=308, 
max=22168, avg=12588.05, stdev=1989.55 00:18:18.583 lat (usec): min=2821, max=22184, avg=12683.58, stdev=1948.62 00:18:18.583 clat percentiles (usec): 00:18:18.583 | 1.00th=[ 6063], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:18:18.583 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:18:18.583 | 70.00th=[12649], 80.00th=[13566], 90.00th=[14484], 95.00th=[14746], 00:18:18.583 | 99.00th=[21365], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:18:18.583 | 99.99th=[22152] 00:18:18.583 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:18:18.583 slat (usec): min=8, max=4500, avg=95.63, stdev=440.36 00:18:18.583 clat (usec): min=8485, max=20781, avg=12644.11, stdev=2210.43 00:18:18.583 lat (usec): min=10497, max=20807, avg=12739.73, stdev=2182.67 00:18:18.583 clat percentiles (usec): 00:18:18.583 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[11207], 20.00th=[11469], 00:18:18.583 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:18:18.583 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13829], 95.00th=[19006], 00:18:18.583 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:18:18.583 | 99.99th=[20841] 00:18:18.584 bw ( KiB/s): min=19904, max=21056, per=34.16%, avg=20480.00, stdev=814.59, samples=2 00:18:18.584 iops : min= 4976, max= 5264, avg=5120.00, stdev=203.65, samples=2 00:18:18.584 lat (usec) : 500=0.01% 00:18:18.584 lat (msec) : 4=0.32%, 10=2.59%, 20=95.13%, 50=1.96% 00:18:18.584 cpu : usr=4.59%, sys=13.47%, ctx=380, majf=0, minf=9 00:18:18.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:18.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.584 issued rwts: total=4897,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.584 job1: (groupid=0, jobs=1): err= 0: pid=67966: Wed May 15 09:11:30 2024 00:18:18.584 read: IOPS=2548, BW=9.95MiB/s (10.4MB/s)(9.98MiB/1003msec) 00:18:18.584 slat (usec): min=8, max=6274, avg=199.49, stdev=777.90 00:18:18.584 clat (usec): min=1559, max=33444, avg=25104.60, stdev=3232.31 00:18:18.584 lat (usec): min=2486, max=34124, avg=25304.09, stdev=3239.62 00:18:18.584 clat percentiles (usec): 00:18:18.584 | 1.00th=[ 8586], 5.00th=[21365], 10.00th=[22414], 20.00th=[24249], 00:18:18.584 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084], 00:18:18.584 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:18:18.584 | 99.00th=[31065], 99.50th=[32637], 99.90th=[33424], 99.95th=[33424], 00:18:18.584 | 99.99th=[33424] 00:18:18.584 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:18:18.584 slat (usec): min=5, max=9301, avg=182.13, stdev=912.07 00:18:18.584 clat (usec): min=16272, max=32892, avg=24274.27, stdev=1946.98 00:18:18.584 lat (usec): min=16289, max=32915, avg=24456.40, stdev=1935.73 00:18:18.584 clat percentiles (usec): 00:18:18.584 | 1.00th=[18744], 5.00th=[20579], 10.00th=[21627], 20.00th=[23462], 00:18:18.584 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:18:18.584 | 70.00th=[24773], 80.00th=[25560], 90.00th=[26346], 95.00th=[27132], 00:18:18.584 | 99.00th=[30278], 99.50th=[30540], 99.90th=[31851], 99.95th=[31851], 00:18:18.584 | 99.99th=[32900] 00:18:18.584 bw ( KiB/s): min= 8944, max=11559, per=17.10%, avg=10251.50, stdev=1849.08, samples=2 
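
The [global]/[job0..3] options printed by the fio-wrapper above form an ordinary fio job file, so the same 1-second, queue-depth-128 sequential-write pass could be reproduced outside the wrapper. A minimal sketch, assuming only what the log prints; the nvmf-write.fio file name and the direct fio invocation are illustrative and not part of the test scripts:

# Reassemble the wrapper-generated job shown above into a standalone file and run it (sketch).
cat > nvmf-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1

verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-write.fio

This assumes the four namespaces of nqn.2016-06.io.spdk:cnode1 are already connected as /dev/nvme0n1..n4, which is the state the test is in at this point.
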
00:18:18.584 iops : min= 2236, max= 2889, avg=2562.50, stdev=461.74, samples=2 00:18:18.584 lat (msec) : 2=0.02%, 4=0.04%, 10=0.66%, 20=2.66%, 50=96.62% 00:18:18.584 cpu : usr=3.19%, sys=6.69%, ctx=264, majf=0, minf=11 00:18:18.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:18.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.584 issued rwts: total=2556,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.584 job2: (groupid=0, jobs=1): err= 0: pid=67967: Wed May 15 09:11:30 2024 00:18:18.584 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:18:18.584 slat (usec): min=6, max=2979, avg=104.01, stdev=397.16 00:18:18.584 clat (usec): min=10605, max=16451, avg=13766.36, stdev=699.80 00:18:18.584 lat (usec): min=11429, max=16462, avg=13870.37, stdev=590.35 00:18:18.584 clat percentiles (usec): 00:18:18.584 | 1.00th=[11469], 5.00th=[12125], 10.00th=[13173], 20.00th=[13435], 00:18:18.584 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:18:18.584 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14353], 95.00th=[14615], 00:18:18.584 | 99.00th=[15139], 99.50th=[16188], 99.90th=[16450], 99.95th=[16450], 00:18:18.584 | 99.99th=[16450] 00:18:18.584 write: IOPS=4793, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1003msec); 0 zone resets 00:18:18.584 slat (usec): min=5, max=3136, avg=101.48, stdev=468.63 00:18:18.584 clat (usec): min=286, max=16103, avg=13165.98, stdev=1277.64 00:18:18.584 lat (usec): min=3205, max=16118, avg=13267.46, stdev=1210.40 00:18:18.584 clat percentiles (usec): 00:18:18.584 | 1.00th=[ 6849], 5.00th=[11469], 10.00th=[12387], 20.00th=[12780], 00:18:18.584 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:18:18.584 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14091], 95.00th=[14222], 00:18:18.584 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16057], 99.95th=[16057], 00:18:18.584 | 99.99th=[16057] 00:18:18.584 bw ( KiB/s): min=17408, max=20032, per=31.22%, avg=18720.00, stdev=1855.45, samples=2 00:18:18.584 iops : min= 4352, max= 5008, avg=4680.00, stdev=463.86, samples=2 00:18:18.584 lat (usec) : 500=0.01% 00:18:18.584 lat (msec) : 4=0.34%, 10=0.59%, 20=99.05% 00:18:18.584 cpu : usr=3.99%, sys=10.58%, ctx=461, majf=0, minf=13 00:18:18.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:18.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.584 issued rwts: total=4608,4808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.584 job3: (groupid=0, jobs=1): err= 0: pid=67968: Wed May 15 09:11:30 2024 00:18:18.584 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(9.95MiB/1004msec) 00:18:18.584 slat (usec): min=8, max=5921, avg=193.61, stdev=748.44 00:18:18.584 clat (usec): min=2210, max=33651, avg=25249.51, stdev=3215.24 00:18:18.584 lat (usec): min=4002, max=34818, avg=25443.12, stdev=3222.66 00:18:18.584 clat percentiles (usec): 00:18:18.584 | 1.00th=[ 8848], 5.00th=[21627], 10.00th=[22938], 20.00th=[24249], 00:18:18.584 | 30.00th=[24773], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:18:18.584 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27395], 95.00th=[27919], 00:18:18.584 | 99.00th=[31589], 99.50th=[31589], 
99.90th=[33817], 99.95th=[33817], 00:18:18.584 | 99.99th=[33817] 00:18:18.584 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:18:18.584 slat (usec): min=5, max=9418, avg=188.90, stdev=932.46 00:18:18.584 clat (usec): min=17623, max=33235, avg=24269.98, stdev=1636.12 00:18:18.584 lat (usec): min=18101, max=33322, avg=24458.88, stdev=1664.85 00:18:18.584 clat percentiles (usec): 00:18:18.584 | 1.00th=[18744], 5.00th=[21365], 10.00th=[22414], 20.00th=[23462], 00:18:18.584 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24511], 60.00th=[24511], 00:18:18.584 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26346], 00:18:18.584 | 99.00th=[29230], 99.50th=[30540], 99.90th=[31851], 99.95th=[32113], 00:18:18.584 | 99.99th=[33162] 00:18:18.584 bw ( KiB/s): min= 8976, max=11504, per=17.08%, avg=10240.00, stdev=1787.57, samples=2 00:18:18.584 iops : min= 2244, max= 2876, avg=2560.00, stdev=446.89, samples=2 00:18:18.584 lat (msec) : 4=0.02%, 10=0.78%, 20=2.37%, 50=96.83% 00:18:18.584 cpu : usr=1.99%, sys=7.88%, ctx=279, majf=0, minf=12 00:18:18.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:18.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.584 issued rwts: total=2548,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.584 00:18:18.584 Run status group 0 (all jobs): 00:18:18.584 READ: bw=56.8MiB/s (59.6MB/s), 9.91MiB/s-19.1MiB/s (10.4MB/s-20.0MB/s), io=57.1MiB (59.8MB), run=1003-1004msec 00:18:18.584 WRITE: bw=58.5MiB/s (61.4MB/s), 9.96MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=58.8MiB (61.6MB), run=1003-1004msec 00:18:18.584 00:18:18.584 Disk stats (read/write): 00:18:18.584 nvme0n1: ios=4146/4352, merge=0/0, ticks=11510/12082, in_queue=23592, util=86.97% 00:18:18.584 nvme0n2: ios=2090/2318, merge=0/0, ticks=25339/25565, in_queue=50904, util=87.54% 00:18:18.584 nvme0n3: ios=3953/4096, merge=0/0, ticks=12945/11869, in_queue=24814, util=89.32% 00:18:18.584 nvme0n4: ios=2048/2290, merge=0/0, ticks=25513/25072, in_queue=50585, util=89.12% 00:18:18.584 09:11:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:18.584 [global] 00:18:18.584 thread=1 00:18:18.584 invalidate=1 00:18:18.584 rw=randwrite 00:18:18.584 time_based=1 00:18:18.584 runtime=1 00:18:18.584 ioengine=libaio 00:18:18.584 direct=1 00:18:18.584 bs=4096 00:18:18.584 iodepth=128 00:18:18.584 norandommap=0 00:18:18.584 numjobs=1 00:18:18.584 00:18:18.584 verify_dump=1 00:18:18.584 verify_backlog=512 00:18:18.584 verify_state_save=0 00:18:18.584 do_verify=1 00:18:18.584 verify=crc32c-intel 00:18:18.584 [job0] 00:18:18.584 filename=/dev/nvme0n1 00:18:18.584 [job1] 00:18:18.584 filename=/dev/nvme0n2 00:18:18.584 [job2] 00:18:18.584 filename=/dev/nvme0n3 00:18:18.584 [job3] 00:18:18.584 filename=/dev/nvme0n4 00:18:18.584 Could not set queue depth (nvme0n1) 00:18:18.584 Could not set queue depth (nvme0n2) 00:18:18.584 Could not set queue depth (nvme0n3) 00:18:18.584 Could not set queue depth (nvme0n4) 00:18:18.584 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.584 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.584 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.584 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.584 fio-3.35 00:18:18.584 Starting 4 threads 00:18:19.960 00:18:19.960 job0: (groupid=0, jobs=1): err= 0: pid=68026: Wed May 15 09:11:32 2024 00:18:19.960 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:18:19.960 slat (usec): min=8, max=3468, avg=91.00, stdev=354.12 00:18:19.960 clat (usec): min=9068, max=15935, avg=12205.36, stdev=869.37 00:18:19.960 lat (usec): min=9080, max=16349, avg=12296.36, stdev=914.42 00:18:19.960 clat percentiles (usec): 00:18:19.960 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11469], 20.00th=[11731], 00:18:19.960 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:18:19.960 | 70.00th=[12387], 80.00th=[12518], 90.00th=[13566], 95.00th=[13960], 00:18:19.960 | 99.00th=[14877], 99.50th=[15401], 99.90th=[15795], 99.95th=[15926], 00:18:19.960 | 99.99th=[15926] 00:18:19.960 write: IOPS=5400, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1003msec); 0 zone resets 00:18:19.960 slat (usec): min=8, max=4118, avg=89.93, stdev=427.29 00:18:19.960 clat (usec): min=196, max=16905, avg=11867.01, stdev=1335.07 00:18:19.960 lat (usec): min=2583, max=16928, avg=11956.94, stdev=1391.27 00:18:19.960 clat percentiles (usec): 00:18:19.960 | 1.00th=[ 6980], 5.00th=[10552], 10.00th=[10945], 20.00th=[11207], 00:18:19.960 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:18:19.960 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13304], 95.00th=[13960], 00:18:19.960 | 99.00th=[15401], 99.50th=[15664], 99.90th=[16188], 99.95th=[16319], 00:18:19.960 | 99.99th=[16909] 00:18:19.960 bw ( KiB/s): min=20568, max=21744, per=35.06%, avg=21156.00, stdev=831.56, samples=2 00:18:19.960 iops : min= 5142, max= 5436, avg=5289.00, stdev=207.89, samples=2 00:18:19.960 lat (usec) : 250=0.01% 00:18:19.960 lat (msec) : 4=0.40%, 10=1.94%, 20=97.66% 00:18:19.960 cpu : usr=5.29%, sys=14.37%, ctx=425, majf=0, minf=11 00:18:19.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:19.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.960 issued rwts: total=5120,5417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.960 job1: (groupid=0, jobs=1): err= 0: pid=68027: Wed May 15 09:11:32 2024 00:18:19.960 read: IOPS=2493, BW=9972KiB/s (10.2MB/s)(9.78MiB/1004msec) 00:18:19.960 slat (usec): min=8, max=8152, avg=202.59, stdev=968.16 00:18:19.960 clat (usec): min=2876, max=40209, avg=24899.29, stdev=4211.92 00:18:19.960 lat (usec): min=2895, max=40229, avg=25101.89, stdev=4135.83 00:18:19.960 clat percentiles (usec): 00:18:19.960 | 1.00th=[ 9372], 5.00th=[17433], 10.00th=[20055], 20.00th=[23200], 00:18:19.960 | 30.00th=[24511], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:18:19.960 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27919], 95.00th=[28705], 00:18:19.960 | 99.00th=[37487], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:18:19.960 | 99.99th=[40109] 00:18:19.960 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:18:19.960 slat (usec): min=5, max=6375, avg=182.15, stdev=912.55 00:18:19.960 clat (usec): min=12688, max=36934, avg=25041.64, stdev=2785.23 00:18:19.960 lat (usec): min=16522, max=36948, avg=25223.79, 
stdev=2633.03 00:18:19.960 clat percentiles (usec): 00:18:19.960 | 1.00th=[17433], 5.00th=[19268], 10.00th=[22152], 20.00th=[24511], 00:18:19.960 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:18:19.960 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27395], 95.00th=[28443], 00:18:19.960 | 99.00th=[35390], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:18:19.960 | 99.99th=[36963] 00:18:19.960 bw ( KiB/s): min= 9464, max=11016, per=16.97%, avg=10240.00, stdev=1097.43, samples=2 00:18:19.960 iops : min= 2366, max= 2754, avg=2560.00, stdev=274.36, samples=2 00:18:19.960 lat (msec) : 4=0.14%, 10=0.63%, 20=8.69%, 50=90.54% 00:18:19.960 cpu : usr=3.19%, sys=7.18%, ctx=184, majf=0, minf=9 00:18:19.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:19.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.961 issued rwts: total=2503,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.961 job2: (groupid=0, jobs=1): err= 0: pid=68028: Wed May 15 09:11:32 2024 00:18:19.961 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:18:19.961 slat (usec): min=8, max=3228, avg=104.86, stdev=446.99 00:18:19.961 clat (usec): min=819, max=16858, avg=13792.34, stdev=1298.15 00:18:19.961 lat (usec): min=3105, max=16877, avg=13897.20, stdev=1224.55 00:18:19.961 clat percentiles (usec): 00:18:19.961 | 1.00th=[ 6783], 5.00th=[12125], 10.00th=[13173], 20.00th=[13566], 00:18:19.961 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:18:19.961 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14746], 95.00th=[14877], 00:18:19.961 | 99.00th=[15270], 99.50th=[15401], 99.90th=[16909], 99.95th=[16909], 00:18:19.961 | 99.99th=[16909] 00:18:19.961 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:18:19.961 slat (usec): min=8, max=3304, avg=102.50, stdev=439.29 00:18:19.961 clat (usec): min=10170, max=16761, avg=13685.35, stdev=779.96 00:18:19.961 lat (usec): min=10334, max=16779, avg=13787.85, stdev=648.59 00:18:19.961 clat percentiles (usec): 00:18:19.961 | 1.00th=[10814], 5.00th=[12780], 10.00th=[13042], 20.00th=[13173], 00:18:19.961 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:18:19.961 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14746], 95.00th=[15008], 00:18:19.961 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16712], 99.95th=[16712], 00:18:19.961 | 99.99th=[16712] 00:18:19.961 bw ( KiB/s): min=17144, max=19759, per=30.58%, avg=18451.50, stdev=1849.08, samples=2 00:18:19.961 iops : min= 4286, max= 4939, avg=4612.50, stdev=461.74, samples=2 00:18:19.961 lat (usec) : 1000=0.01% 00:18:19.961 lat (msec) : 4=0.34%, 10=0.36%, 20=99.29% 00:18:19.961 cpu : usr=5.89%, sys=12.28%, ctx=457, majf=0, minf=11 00:18:19.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:19.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.961 issued rwts: total=4608,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.961 job3: (groupid=0, jobs=1): err= 0: pid=68029: Wed May 15 09:11:32 2024 00:18:19.961 read: IOPS=2393, BW=9575KiB/s (9805kB/s)(9604KiB/1003msec) 00:18:19.961 slat (usec): min=6, max=11898, 
avg=204.02, stdev=832.68 00:18:19.961 clat (usec): min=895, max=36270, avg=26135.63, stdev=4182.79 00:18:19.961 lat (usec): min=3062, max=36287, avg=26339.66, stdev=4109.27 00:18:19.961 clat percentiles (usec): 00:18:19.961 | 1.00th=[ 6652], 5.00th=[22152], 10.00th=[23987], 20.00th=[25560], 00:18:19.961 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:18:19.961 | 70.00th=[26346], 80.00th=[27132], 90.00th=[30278], 95.00th=[33162], 00:18:19.961 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:18:19.961 | 99.99th=[36439] 00:18:19.961 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:18:19.961 slat (usec): min=5, max=6189, avg=189.70, stdev=925.43 00:18:19.961 clat (usec): min=15301, max=28988, avg=24713.82, stdev=1780.19 00:18:19.961 lat (usec): min=19408, max=29002, avg=24903.52, stdev=1559.95 00:18:19.961 clat percentiles (usec): 00:18:19.961 | 1.00th=[19530], 5.00th=[19792], 10.00th=[22152], 20.00th=[24511], 00:18:19.961 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:18:19.961 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26084], 95.00th=[26346], 00:18:19.961 | 99.00th=[27657], 99.50th=[28181], 99.90th=[28967], 99.95th=[28967], 00:18:19.961 | 99.99th=[28967] 00:18:19.961 bw ( KiB/s): min= 9640, max=10818, per=16.95%, avg=10229.00, stdev=832.97, samples=2 00:18:19.961 iops : min= 2410, max= 2704, avg=2557.00, stdev=207.89, samples=2 00:18:19.961 lat (usec) : 1000=0.02% 00:18:19.961 lat (msec) : 4=0.28%, 10=0.56%, 20=4.19%, 50=94.94% 00:18:19.961 cpu : usr=3.69%, sys=6.69%, ctx=259, majf=0, minf=19 00:18:19.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:19.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.961 issued rwts: total=2401,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.961 00:18:19.961 Run status group 0 (all jobs): 00:18:19.961 READ: bw=56.9MiB/s (59.7MB/s), 9575KiB/s-19.9MiB/s (9805kB/s-20.9MB/s), io=57.2MiB (59.9MB), run=1003-1004msec 00:18:19.961 WRITE: bw=58.9MiB/s (61.8MB/s), 9.96MiB/s-21.1MiB/s (10.4MB/s-22.1MB/s), io=59.2MiB (62.0MB), run=1003-1004msec 00:18:19.961 00:18:19.961 Disk stats (read/write): 00:18:19.961 nvme0n1: ios=4188/4608, merge=0/0, ticks=16009/15485, in_queue=31494, util=85.16% 00:18:19.961 nvme0n2: ios=2086/2225, merge=0/0, ticks=12555/12388, in_queue=24943, util=86.05% 00:18:19.961 nvme0n3: ios=3584/4095, merge=0/0, ticks=11393/12335, in_queue=23728, util=88.58% 00:18:19.961 nvme0n4: ios=2027/2048, merge=0/0, ticks=13012/11873, in_queue=24885, util=88.79% 00:18:19.961 09:11:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:19.961 09:11:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68042 00:18:19.961 09:11:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:19.961 09:11:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:19.961 [global] 00:18:19.961 thread=1 00:18:19.961 invalidate=1 00:18:19.961 rw=read 00:18:19.961 time_based=1 00:18:19.961 runtime=10 00:18:19.961 ioengine=libaio 00:18:19.961 direct=1 00:18:19.961 bs=4096 00:18:19.961 iodepth=1 00:18:19.961 norandommap=1 00:18:19.961 numjobs=1 00:18:19.961 00:18:19.961 [job0] 00:18:19.961 filename=/dev/nvme0n1 00:18:19.961 [job1] 00:18:19.961 
filename=/dev/nvme0n2 00:18:19.961 [job2] 00:18:19.961 filename=/dev/nvme0n3 00:18:19.961 [job3] 00:18:19.961 filename=/dev/nvme0n4 00:18:19.961 Could not set queue depth (nvme0n1) 00:18:19.961 Could not set queue depth (nvme0n2) 00:18:19.961 Could not set queue depth (nvme0n3) 00:18:19.961 Could not set queue depth (nvme0n4) 00:18:20.219 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.219 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.219 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.219 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.219 fio-3.35 00:18:20.219 Starting 4 threads 00:18:23.503 09:11:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:23.503 fio: pid=68091, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:23.503 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=47144960, buflen=4096 00:18:23.503 09:11:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:23.503 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=70803456, buflen=4096 00:18:23.503 fio: pid=68090, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:23.503 09:11:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.503 09:11:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:23.762 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=57282560, buflen=4096 00:18:23.762 fio: pid=68083, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:23.762 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.762 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:24.021 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=20041728, buflen=4096 00:18:24.021 fio: pid=68088, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:24.021 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.021 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:24.021 00:18:24.021 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68083: Wed May 15 09:11:36 2024 00:18:24.021 read: IOPS=4078, BW=15.9MiB/s (16.7MB/s)(54.6MiB/3429msec) 00:18:24.021 slat (usec): min=6, max=11678, avg=12.68, stdev=149.05 00:18:24.021 clat (usec): min=109, max=10432, avg=231.36, stdev=122.74 00:18:24.021 lat (usec): min=124, max=18198, avg=244.04, stdev=220.21 00:18:24.021 clat percentiles (usec): 00:18:24.021 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 161], 20.00th=[ 210], 00:18:24.021 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:18:24.021 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:18:24.021 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 
330], 99.95th=[ 433], 00:18:24.021 | 99.99th=[ 8717] 00:18:24.021 bw ( KiB/s): min=15344, max=15832, per=22.40%, avg=15648.17, stdev=197.20, samples=6 00:18:24.021 iops : min= 3836, max= 3958, avg=3912.00, stdev=49.35, samples=6 00:18:24.021 lat (usec) : 250=68.69%, 500=31.27%, 750=0.01% 00:18:24.021 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01% 00:18:24.021 cpu : usr=0.90%, sys=4.11%, ctx=14002, majf=0, minf=1 00:18:24.021 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.021 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.021 issued rwts: total=13986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.021 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.021 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68088: Wed May 15 09:11:36 2024 00:18:24.021 read: IOPS=5800, BW=22.7MiB/s (23.8MB/s)(83.1MiB/3668msec) 00:18:24.021 slat (usec): min=7, max=9398, avg=12.98, stdev=139.32 00:18:24.021 clat (usec): min=2, max=9514, avg=158.29, stdev=113.01 00:18:24.021 lat (usec): min=114, max=9619, avg=171.27, stdev=179.76 00:18:24.021 clat percentiles (usec): 00:18:24.021 | 1.00th=[ 117], 5.00th=[ 130], 10.00th=[ 139], 20.00th=[ 145], 00:18:24.021 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:18:24.021 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:18:24.021 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 273], 99.95th=[ 873], 00:18:24.021 | 99.99th=[ 7439] 00:18:24.021 bw ( KiB/s): min=21304, max=24518, per=33.19%, avg=23187.57, stdev=1121.17, samples=7 00:18:24.021 iops : min= 5326, max= 6129, avg=5796.71, stdev=280.31, samples=7 00:18:24.021 lat (usec) : 4=0.01%, 10=0.01%, 250=99.87%, 500=0.03%, 750=0.02% 00:18:24.021 lat (usec) : 1000=0.03% 00:18:24.021 lat (msec) : 2=0.01%, 4=0.01%, 10=0.02% 00:18:24.021 cpu : usr=1.39%, sys=5.75%, ctx=21312, majf=0, minf=1 00:18:24.021 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.021 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.021 issued rwts: total=21278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.021 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.021 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68090: Wed May 15 09:11:36 2024 00:18:24.021 read: IOPS=5373, BW=21.0MiB/s (22.0MB/s)(67.5MiB/3217msec) 00:18:24.021 slat (usec): min=7, max=12806, avg=12.34, stdev=126.14 00:18:24.021 clat (usec): min=4, max=2348, avg=172.75, stdev=36.31 00:18:24.021 lat (usec): min=123, max=13016, avg=185.10, stdev=131.65 00:18:24.021 clat percentiles (usec): 00:18:24.021 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:18:24.021 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:18:24.021 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:18:24.021 | 99.00th=[ 219], 99.50th=[ 273], 99.90th=[ 594], 99.95th=[ 783], 00:18:24.021 | 99.99th=[ 2212] 00:18:24.021 bw ( KiB/s): min=20784, max=22520, per=30.81%, avg=21522.33, stdev=798.79, samples=6 00:18:24.021 iops : min= 5196, max= 5630, avg=5380.50, stdev=199.78, samples=6 00:18:24.021 lat (usec) : 10=0.01%, 250=99.45%, 500=0.36%, 750=0.12%, 1000=0.04% 00:18:24.021 lat (msec) : 2=0.01%, 4=0.01% 
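
The bandwidth figures fio reports here follow directly from the issued I/O counts and runtimes shown in the same block. For job1 above, 21278 completed 4096-byte reads over 3668 ms work out to the reported 22.7MiB/s (23.8MB/s); a one-line awk check of that arithmetic (illustrative only):

# job1: 21278 x 4 KiB reads in 3.668 s -> prints "22.7 MiB/s (23.8 MB/s)"
awk 'BEGIN { b = 21278 * 4096; printf "%.1f MiB/s (%.1f MB/s)\n", b/1048576/3.668, b/1e6/3.668 }'

The err=121 on each job is the point of this pass: the raid and malloc bdevs behind the namespaces are deleted while the 10-second read jobs run, so fio sees Remote I/O errors once its targets disappear, which the script later reports as "fio failed as expected".
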
00:18:24.021 cpu : usr=1.34%, sys=5.57%, ctx=17350, majf=0, minf=1 00:18:24.021 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.021 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.021 issued rwts: total=17287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.021 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.021 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68091: Wed May 15 09:11:36 2024 00:18:24.021 read: IOPS=3917, BW=15.3MiB/s (16.0MB/s)(45.0MiB/2938msec) 00:18:24.021 slat (nsec): min=6203, max=72737, avg=9057.35, stdev=2492.25 00:18:24.021 clat (usec): min=175, max=412, avg=245.20, stdev=20.41 00:18:24.021 lat (usec): min=182, max=420, avg=254.26, stdev=21.34 00:18:24.021 clat percentiles (usec): 00:18:24.021 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:18:24.021 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:18:24.021 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:18:24.021 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 326], 99.95th=[ 343], 00:18:24.021 | 99.99th=[ 359] 00:18:24.021 bw ( KiB/s): min=15344, max=15832, per=22.45%, avg=15684.80, stdev=196.33, samples=5 00:18:24.021 iops : min= 3836, max= 3958, avg=3921.20, stdev=49.08, samples=5 00:18:24.021 lat (usec) : 250=60.97%, 500=39.02% 00:18:24.021 cpu : usr=0.92%, sys=3.44%, ctx=11514, majf=0, minf=2 00:18:24.021 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.021 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.021 issued rwts: total=11511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.021 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.021 00:18:24.021 Run status group 0 (all jobs): 00:18:24.021 READ: bw=68.2MiB/s (71.5MB/s), 15.3MiB/s-22.7MiB/s (16.0MB/s-23.8MB/s), io=250MiB (262MB), run=2938-3668msec 00:18:24.021 00:18:24.021 Disk stats (read/write): 00:18:24.021 nvme0n1: ios=13592/0, merge=0/0, ticks=3104/0, in_queue=3104, util=94.90% 00:18:24.021 nvme0n2: ios=20919/0, merge=0/0, ticks=3271/0, in_queue=3271, util=94.61% 00:18:24.021 nvme0n3: ios=16676/0, merge=0/0, ticks=2906/0, in_queue=2906, util=96.11% 00:18:24.021 nvme0n4: ios=11208/0, merge=0/0, ticks=2681/0, in_queue=2681, util=96.82% 00:18:24.279 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.279 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:24.569 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.569 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:24.569 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.569 09:11:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:24.826 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:18:24.826 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68042 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:25.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:25.084 nvmf hotplug test: fio failed as expected 00:18:25.084 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.342 rmmod nvme_tcp 00:18:25.342 rmmod nvme_fabrics 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67662 ']' 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67662 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 67662 ']' 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 67662 00:18:25.342 
09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:25.342 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 67662 00:18:25.600 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:25.600 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:25.600 killing process with pid 67662 00:18:25.600 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 67662' 00:18:25.600 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 67662 00:18:25.601 [2024-05-15 09:11:37.801900] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:25.601 09:11:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 67662 00:18:25.601 09:11:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:25.601 09:11:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:25.601 09:11:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:25.601 09:11:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.601 09:11:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.601 09:11:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.601 09:11:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.601 09:11:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.859 09:11:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:25.859 00:18:25.859 real 0m19.199s 00:18:25.859 user 1m11.901s 00:18:25.859 sys 0m10.401s 00:18:25.859 09:11:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:25.859 ************************************ 00:18:25.859 09:11:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.859 END TEST nvmf_fio_target 00:18:25.859 ************************************ 00:18:25.859 09:11:38 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:25.859 09:11:38 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:25.859 09:11:38 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:25.859 09:11:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.859 ************************************ 00:18:25.859 START TEST nvmf_bdevio 00:18:25.859 ************************************ 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:25.859 * Looking for test storage... 
00:18:25.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.859 09:11:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.860 09:11:38 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:25.860 Cannot find device "nvmf_tgt_br" 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:25.860 Cannot find device "nvmf_tgt_br2" 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:18:25.860 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:26.119 Cannot find device "nvmf_tgt_br" 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:26.119 Cannot find device "nvmf_tgt_br2" 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:18:26.119 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:26.120 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:26.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:26.378 00:18:26.378 --- 10.0.0.2 ping statistics --- 00:18:26.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.378 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:26.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:26.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:26.378 00:18:26.378 --- 10.0.0.3 ping statistics --- 00:18:26.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.378 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:26.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:26.378 00:18:26.378 --- 10.0.0.1 ping statistics --- 00:18:26.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.378 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68354 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68354 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 68354 ']' 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:26.378 09:11:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:26.378 [2024-05-15 09:11:38.680068] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:18:26.378 [2024-05-15 09:11:38.680153] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.378 [2024-05-15 09:11:38.816115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:26.636 [2024-05-15 09:11:38.915154] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.636 [2024-05-15 09:11:38.915201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
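
Taken together, the nvmf_veth_init commands traced above build the test topology: nvmf_init_if stays on the host with 10.0.0.1/24, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the three peer ends are joined by the nvmf_br bridge, with TCP port 4420 opened on the host side. Condensed into a standalone sketch of the same commands (no error handling or cleanup):

# Same veth/namespace/bridge wiring as nvmf_veth_init above (sketch)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check that this wiring came up.
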
00:18:26.636 [2024-05-15 09:11:38.915211] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.636 [2024-05-15 09:11:38.915221] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.636 [2024-05-15 09:11:38.915229] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.636 [2024-05-15 09:11:38.915392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:26.636 [2024-05-15 09:11:38.915630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:26.636 [2024-05-15 09:11:38.915686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:26.636 [2024-05-15 09:11:38.916178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:27.586 [2024-05-15 09:11:39.730869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:27.586 Malloc0 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
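
Before bdevio runs, the target side is assembled through a handful of RPCs, traced above via the test framework's rpc_cmd helper (a shortcut for scripts/rpc.py calls against the running nvmf_tgt). Issued by hand, the same bring-up is roughly:

# Target bring-up used by bdevio.sh (sketch; arguments taken verbatim from the trace above)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is added on 10.0.0.2:4420 (the namespace-side address wired up earlier), the subsystem is reachable from the host across the bridged veth pair.
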
00:18:27.586 [2024-05-15 09:11:39.786746] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:27.586 [2024-05-15 09:11:39.787335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:27.586 { 00:18:27.586 "params": { 00:18:27.586 "name": "Nvme$subsystem", 00:18:27.586 "trtype": "$TEST_TRANSPORT", 00:18:27.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:27.586 "adrfam": "ipv4", 00:18:27.586 "trsvcid": "$NVMF_PORT", 00:18:27.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:27.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:27.586 "hdgst": ${hdgst:-false}, 00:18:27.586 "ddgst": ${ddgst:-false} 00:18:27.586 }, 00:18:27.586 "method": "bdev_nvme_attach_controller" 00:18:27.586 } 00:18:27.586 EOF 00:18:27.586 )") 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:27.586 09:11:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:27.586 "params": { 00:18:27.586 "name": "Nvme1", 00:18:27.586 "trtype": "tcp", 00:18:27.586 "traddr": "10.0.0.2", 00:18:27.586 "adrfam": "ipv4", 00:18:27.586 "trsvcid": "4420", 00:18:27.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.586 "hdgst": false, 00:18:27.586 "ddgst": false 00:18:27.586 }, 00:18:27.586 "method": "bdev_nvme_attach_controller" 00:18:27.586 }' 00:18:27.586 [2024-05-15 09:11:39.846481] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
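The resolved bdev_nvme_attach_controller fragment printed just above is what gen_nvmf_target_json feeds to bdevio over /dev/fd/62. Written out as a regular file it could look like the sketch below; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config layout and is assumed here (only the inner entry appears verbatim in the trace), and the file name is made up for illustration.

cat > /tmp/bdevio_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json

Feeding /dev/fd/62 in the test simply avoids a temporary file; the content consumed by bdevio is the same either way.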
00:18:27.586 [2024-05-15 09:11:39.846799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68391 ] 00:18:27.586 [2024-05-15 09:11:39.994056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:27.845 [2024-05-15 09:11:40.114371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.845 [2024-05-15 09:11:40.114490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.845 [2024-05-15 09:11:40.114495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.845 I/O targets: 00:18:27.845 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:27.845 00:18:27.845 00:18:27.845 CUnit - A unit testing framework for C - Version 2.1-3 00:18:27.845 http://cunit.sourceforge.net/ 00:18:27.845 00:18:27.845 00:18:27.845 Suite: bdevio tests on: Nvme1n1 00:18:28.125 Test: blockdev write read block ...passed 00:18:28.125 Test: blockdev write zeroes read block ...passed 00:18:28.125 Test: blockdev write zeroes read no split ...passed 00:18:28.126 Test: blockdev write zeroes read split ...passed 00:18:28.126 Test: blockdev write zeroes read split partial ...passed 00:18:28.126 Test: blockdev reset ...[2024-05-15 09:11:40.320424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:28.126 [2024-05-15 09:11:40.320761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ae580 (9): Bad file descriptor 00:18:28.126 [2024-05-15 09:11:40.337379] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:28.126 passed 00:18:28.126 Test: blockdev write read 8 blocks ...passed 00:18:28.126 Test: blockdev write read size > 128k ...passed 00:18:28.126 Test: blockdev write read invalid size ...passed 00:18:28.126 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:28.126 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:28.126 Test: blockdev write read max offset ...passed 00:18:28.126 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:28.126 Test: blockdev writev readv 8 blocks ...passed 00:18:28.126 Test: blockdev writev readv 30 x 1block ...passed 00:18:28.126 Test: blockdev writev readv block ...passed 00:18:28.126 Test: blockdev writev readv size > 128k ...passed 00:18:28.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:28.126 Test: blockdev comparev and writev ...[2024-05-15 09:11:40.347411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.126 [2024-05-15 09:11:40.347627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.347741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.126 [2024-05-15 09:11:40.347839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.348451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.126 [2024-05-15 09:11:40.348599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.348807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.126 [2024-05-15 09:11:40.348922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.349409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.126 [2024-05-15 09:11:40.349537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.349679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.126 [2024-05-15 09:11:40.349883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.350266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.126 [2024-05-15 09:11:40.350385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.350491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.126 [2024-05-15 09:11:40.350594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:28.126 passed 00:18:28.126 Test: blockdev nvme passthru rw ...passed 00:18:28.126 Test: blockdev nvme passthru vendor specific ...[2024-05-15 09:11:40.351636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.126 [2024-05-15 09:11:40.351777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.351986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.126 [2024-05-15 09:11:40.352145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.352375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.126 [2024-05-15 09:11:40.352513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:28.126 [2024-05-15 09:11:40.352779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.126 [2024-05-15 09:11:40.352890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:28.126 passed 00:18:28.126 Test: blockdev nvme admin passthru ...passed 00:18:28.126 Test: blockdev copy ...passed 00:18:28.126 00:18:28.126 Run Summary: Type Total Ran Passed Failed Inactive 00:18:28.126 suites 1 1 n/a 0 0 00:18:28.126 tests 23 23 23 0 0 00:18:28.126 asserts 
152 152 152 0 n/a 00:18:28.126 00:18:28.126 Elapsed time = 0.166 seconds 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.389 rmmod nvme_tcp 00:18:28.389 rmmod nvme_fabrics 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68354 ']' 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68354 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 68354 ']' 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 68354 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 68354 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 68354' 00:18:28.389 killing process with pid 68354 00:18:28.389 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 68354 00:18:28.389 [2024-05-15 09:11:40.722158] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 68354 00:18:28.389 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:28.648 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.648 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.648 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.648 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.648 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.648 09:11:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.648 09:11:40 nvmf_tcp.nvmf_bdevio 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.648 09:11:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.648 09:11:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:28.648 00:18:28.648 real 0m2.910s 00:18:28.648 user 0m9.496s 00:18:28.648 sys 0m0.794s 00:18:28.648 09:11:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:28.648 09:11:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:28.648 ************************************ 00:18:28.648 END TEST nvmf_bdevio 00:18:28.648 ************************************ 00:18:28.648 09:11:41 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:28.648 09:11:41 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:28.648 09:11:41 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:28.648 09:11:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:28.648 ************************************ 00:18:28.648 START TEST nvmf_auth_target 00:18:28.648 ************************************ 00:18:28.648 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:28.907 * Looking for test storage... 00:18:28.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:28.907 09:11:41 
nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:28.907 Cannot find device "nvmf_tgt_br" 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.907 Cannot find device "nvmf_tgt_br2" 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:28.907 Cannot find device "nvmf_tgt_br" 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:28.907 Cannot find device "nvmf_tgt_br2" 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:28.907 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:29.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:29.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:29.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:18:29.167 00:18:29.167 --- 10.0.0.2 ping statistics --- 00:18:29.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.167 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:29.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:29.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:18:29.167 00:18:29.167 --- 10.0.0.3 ping statistics --- 00:18:29.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.167 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:29.167 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:29.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:29.426 00:18:29.426 --- 10.0.0.1 ping statistics --- 00:18:29.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.426 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68562 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68562 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 68562 ']' 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
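The nvmf_veth_init sequence above, closed out by the three ping checks, builds a small bridged topology: one veth pair for the initiator side, two pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, and an nvmf_br bridge joining the host-side ends. A condensed sketch using the same names, addresses and firewall rules as the trace (only the ordering is compressed and the pre-cleanup steps are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address

ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # host -> target reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> host reachability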
00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:29.426 09:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.361 09:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:30.361 09:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:30.361 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.361 09:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:30.361 09:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=68600 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6ff5ee9a006849bf0dc49e50418d6153d71c534faf78e422 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Q3G 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6ff5ee9a006849bf0dc49e50418d6153d71c534faf78e422 0 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6ff5ee9a006849bf0dc49e50418d6153d71c534faf78e422 0 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6ff5ee9a006849bf0dc49e50418d6153d71c534faf78e422 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Q3G 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Q3G 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.Q3G 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:30.620 
09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8448434c19e3402c59455dc7a7b71588 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.j4G 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8448434c19e3402c59455dc7a7b71588 1 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8448434c19e3402c59455dc7a7b71588 1 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8448434c19e3402c59455dc7a7b71588 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.j4G 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.j4G 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.j4G 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=be2abb22a56c93ed555abcc40f4fdaf1cc4d0c92af6ada0e 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aSv 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key be2abb22a56c93ed555abcc40f4fdaf1cc4d0c92af6ada0e 2 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 be2abb22a56c93ed555abcc40f4fdaf1cc4d0c92af6ada0e 2 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=be2abb22a56c93ed555abcc40f4fdaf1cc4d0c92af6ada0e 00:18:30.620 09:11:42 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:30.620 09:11:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aSv 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aSv 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.aSv 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:30.620 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7f1512ec881790438bf0ef325d1c25536a2395b1056ce6c76141107f6057d264 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pej 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7f1512ec881790438bf0ef325d1c25536a2395b1056ce6c76141107f6057d264 3 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7f1512ec881790438bf0ef325d1c25536a2395b1056ce6c76141107f6057d264 3 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7f1512ec881790438bf0ef325d1c25536a2395b1056ce6c76141107f6057d264 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pej 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pej 00:18:30.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.Pej 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 68562 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 68562 ']' 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
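gen_dhchap_key above repeats one pattern for all four secrets: read the requested number of hex characters from /dev/urandom, wrap them into a DHHC-1:<digest-id>:...: string with the small embedded python helper, and store the result with mode 0600 in a spdk.key-<digest> temp file. A sketch of just the raw-material step with the digest/length pairs used in this run (the DHHC-1 wrapping is left to format_dhchap_key and not reproduced here; the helper name gen_hex_secret is made up):

gen_hex_secret() {                      # $1 = key length in hex characters
    xxd -p -c0 -l $(($1 / 2)) /dev/urandom
}
key_null=$(gen_hex_secret 48)           # keys[0], digest id 0 (null)
key_sha256=$(gen_hex_secret 32)         # keys[1], digest id 1
key_sha384=$(gen_hex_secret 48)         # keys[2], digest id 2
key_sha512=$(gen_hex_secret 64)         # keys[3], digest id 3
file=$(mktemp -t spdk.key-null.XXX)     # e.g. /tmp/spdk.key-null.Q3G above
chmod 0600 "$file"

The two-digit field after DHHC-1: in the secrets used later by nvme connect (00, 01, 02) matches this digest id.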
00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:30.880 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.143 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:31.143 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:31.143 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 68600 /var/tmp/host.sock 00:18:31.143 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 68600 ']' 00:18:31.143 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:18:31.143 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:31.143 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:31.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:31.143 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:31.144 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Q3G 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Q3G 00:18:31.402 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Q3G 00:18:31.661 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:31.661 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.j4G 00:18:31.661 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.661 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.661 09:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.661 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.j4G 00:18:31.661 09:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 
/tmp/spdk.key-sha256.j4G 00:18:31.920 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:31.920 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aSv 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.aSv 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.aSv 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Pej 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.921 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.180 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.180 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Pej 00:18:32.180 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Pej 00:18:32.180 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:32.180 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.180 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:32.180 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:32.180 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:32.465 09:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:32.734 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:32.992 { 00:18:32.992 "cntlid": 1, 00:18:32.992 "qid": 0, 00:18:32.992 "state": "enabled", 00:18:32.992 "listen_address": { 00:18:32.992 "trtype": "TCP", 00:18:32.992 "adrfam": "IPv4", 00:18:32.992 "traddr": "10.0.0.2", 00:18:32.992 "trsvcid": "4420" 00:18:32.992 }, 00:18:32.992 "peer_address": { 00:18:32.992 "trtype": "TCP", 00:18:32.992 "adrfam": "IPv4", 00:18:32.992 "traddr": "10.0.0.1", 00:18:32.992 "trsvcid": "51964" 00:18:32.992 }, 00:18:32.992 "auth": { 00:18:32.992 "state": "completed", 00:18:32.992 "digest": "sha256", 00:18:32.992 "dhgroup": "null" 00:18:32.992 } 00:18:32.992 } 00:18:32.992 ]' 00:18:32.992 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:33.251 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.251 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:33.251 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:33.251 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:33.251 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.251 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.251 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.509 09:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:38.789 00:18:38.789 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:38.790 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.790 09:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:38.790 { 00:18:38.790 "cntlid": 3, 00:18:38.790 "qid": 0, 
00:18:38.790 "state": "enabled", 00:18:38.790 "listen_address": { 00:18:38.790 "trtype": "TCP", 00:18:38.790 "adrfam": "IPv4", 00:18:38.790 "traddr": "10.0.0.2", 00:18:38.790 "trsvcid": "4420" 00:18:38.790 }, 00:18:38.790 "peer_address": { 00:18:38.790 "trtype": "TCP", 00:18:38.790 "adrfam": "IPv4", 00:18:38.790 "traddr": "10.0.0.1", 00:18:38.790 "trsvcid": "33198" 00:18:38.790 }, 00:18:38.790 "auth": { 00:18:38.790 "state": "completed", 00:18:38.790 "digest": "sha256", 00:18:38.790 "dhgroup": "null" 00:18:38.790 } 00:18:38.790 } 00:18:38.790 ]' 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.790 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:39.048 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:39.048 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:39.048 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.048 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.048 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.306 09:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:18:39.879 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.879 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:39.879 09:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.879 09:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.879 09:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.879 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:39.879 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:39.879 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:40.138 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:40.705 00:18:40.705 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:40.705 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:40.705 09:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:40.965 { 00:18:40.965 "cntlid": 5, 00:18:40.965 "qid": 0, 00:18:40.965 "state": "enabled", 00:18:40.965 "listen_address": { 00:18:40.965 "trtype": "TCP", 00:18:40.965 "adrfam": "IPv4", 00:18:40.965 "traddr": "10.0.0.2", 00:18:40.965 "trsvcid": "4420" 00:18:40.965 }, 00:18:40.965 "peer_address": { 00:18:40.965 "trtype": "TCP", 00:18:40.965 "adrfam": "IPv4", 00:18:40.965 "traddr": "10.0.0.1", 00:18:40.965 "trsvcid": "33228" 00:18:40.965 }, 00:18:40.965 "auth": { 00:18:40.965 "state": "completed", 00:18:40.965 "digest": "sha256", 00:18:40.965 "dhgroup": "null" 00:18:40.965 } 00:18:40.965 } 00:18:40.965 ]' 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.965 09:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.223 09:11:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:18:42.157 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.157 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:42.157 09:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.157 09:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.157 09:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.157 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:42.157 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.157 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.416 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.675 00:18:42.675 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:42.675 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:42.675 09:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.675 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.675 09:11:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.675 09:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.675 09:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.675 09:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:42.942 { 00:18:42.942 "cntlid": 7, 00:18:42.942 "qid": 0, 00:18:42.942 "state": "enabled", 00:18:42.942 "listen_address": { 00:18:42.942 "trtype": "TCP", 00:18:42.942 "adrfam": "IPv4", 00:18:42.942 "traddr": "10.0.0.2", 00:18:42.942 "trsvcid": "4420" 00:18:42.942 }, 00:18:42.942 "peer_address": { 00:18:42.942 "trtype": "TCP", 00:18:42.942 "adrfam": "IPv4", 00:18:42.942 "traddr": "10.0.0.1", 00:18:42.942 "trsvcid": "33250" 00:18:42.942 }, 00:18:42.942 "auth": { 00:18:42.942 "state": "completed", 00:18:42.942 "digest": "sha256", 00:18:42.942 "dhgroup": "null" 00:18:42.942 } 00:18:42.942 } 00:18:42.942 ]' 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.942 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.201 09:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.161 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.731 00:18:44.731 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:44.731 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:44.731 09:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:44.989 { 00:18:44.989 "cntlid": 9, 00:18:44.989 "qid": 0, 00:18:44.989 "state": "enabled", 00:18:44.989 "listen_address": { 00:18:44.989 "trtype": "TCP", 00:18:44.989 "adrfam": "IPv4", 00:18:44.989 "traddr": "10.0.0.2", 00:18:44.989 "trsvcid": "4420" 00:18:44.989 }, 00:18:44.989 "peer_address": { 00:18:44.989 "trtype": "TCP", 00:18:44.989 "adrfam": "IPv4", 00:18:44.989 "traddr": "10.0.0.1", 00:18:44.989 "trsvcid": "33278" 00:18:44.989 }, 00:18:44.989 "auth": { 00:18:44.989 "state": "completed", 00:18:44.989 "digest": "sha256", 00:18:44.989 "dhgroup": "ffdhe2048" 00:18:44.989 } 00:18:44.989 } 00:18:44.989 ]' 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.989 09:11:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.989 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.554 09:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:18:46.120 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.120 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:46.120 09:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.120 09:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.120 09:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.120 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:46.120 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.120 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:46.685 09:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:46.943 00:18:46.943 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:46.943 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:46.943 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:47.202 { 00:18:47.202 "cntlid": 11, 00:18:47.202 "qid": 0, 00:18:47.202 "state": "enabled", 00:18:47.202 "listen_address": { 00:18:47.202 "trtype": "TCP", 00:18:47.202 "adrfam": "IPv4", 00:18:47.202 "traddr": "10.0.0.2", 00:18:47.202 "trsvcid": "4420" 00:18:47.202 }, 00:18:47.202 "peer_address": { 00:18:47.202 "trtype": "TCP", 00:18:47.202 "adrfam": "IPv4", 00:18:47.202 "traddr": "10.0.0.1", 00:18:47.202 "trsvcid": "49456" 00:18:47.202 }, 00:18:47.202 "auth": { 00:18:47.202 "state": "completed", 00:18:47.202 "digest": "sha256", 00:18:47.202 "dhgroup": "ffdhe2048" 00:18:47.202 } 00:18:47.202 } 00:18:47.202 ]' 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.202 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.777 09:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:18:48.359 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.359 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:48.359 09:12:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.359 09:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.359 09:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.359 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:48.359 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.359 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:48.617 09:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:48.876 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:49.134 { 00:18:49.134 "cntlid": 13, 00:18:49.134 "qid": 0, 00:18:49.134 "state": "enabled", 00:18:49.134 "listen_address": { 00:18:49.134 "trtype": "TCP", 00:18:49.134 "adrfam": "IPv4", 00:18:49.134 "traddr": 
"10.0.0.2", 00:18:49.134 "trsvcid": "4420" 00:18:49.134 }, 00:18:49.134 "peer_address": { 00:18:49.134 "trtype": "TCP", 00:18:49.134 "adrfam": "IPv4", 00:18:49.134 "traddr": "10.0.0.1", 00:18:49.134 "trsvcid": "49490" 00:18:49.134 }, 00:18:49.134 "auth": { 00:18:49.134 "state": "completed", 00:18:49.134 "digest": "sha256", 00:18:49.134 "dhgroup": "ffdhe2048" 00:18:49.134 } 00:18:49.134 } 00:18:49.134 ]' 00:18:49.134 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.392 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.392 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.392 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.392 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.392 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.392 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.392 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.649 09:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:18:50.584 09:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.584 09:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:50.584 09:12:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.584 09:12:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.584 09:12:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.584 09:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:50.584 09:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.584 09:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 
00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.584 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.150 00:18:51.150 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:51.150 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.150 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:51.409 { 00:18:51.409 "cntlid": 15, 00:18:51.409 "qid": 0, 00:18:51.409 "state": "enabled", 00:18:51.409 "listen_address": { 00:18:51.409 "trtype": "TCP", 00:18:51.409 "adrfam": "IPv4", 00:18:51.409 "traddr": "10.0.0.2", 00:18:51.409 "trsvcid": "4420" 00:18:51.409 }, 00:18:51.409 "peer_address": { 00:18:51.409 "trtype": "TCP", 00:18:51.409 "adrfam": "IPv4", 00:18:51.409 "traddr": "10.0.0.1", 00:18:51.409 "trsvcid": "49512" 00:18:51.409 }, 00:18:51.409 "auth": { 00:18:51.409 "state": "completed", 00:18:51.409 "digest": "sha256", 00:18:51.409 "dhgroup": "ffdhe2048" 00:18:51.409 } 00:18:51.409 } 00:18:51.409 ]' 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.409 09:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.975 09:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.542 09:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:52.801 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:53.059 00:18:53.059 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.059 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.059 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.349 09:12:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:53.349 { 00:18:53.349 "cntlid": 17, 00:18:53.349 "qid": 0, 00:18:53.349 "state": "enabled", 00:18:53.349 "listen_address": { 00:18:53.349 "trtype": "TCP", 00:18:53.349 "adrfam": "IPv4", 00:18:53.349 "traddr": "10.0.0.2", 00:18:53.349 "trsvcid": "4420" 00:18:53.349 }, 00:18:53.349 "peer_address": { 00:18:53.349 "trtype": "TCP", 00:18:53.349 "adrfam": "IPv4", 00:18:53.349 "traddr": "10.0.0.1", 00:18:53.349 "trsvcid": "49540" 00:18:53.349 }, 00:18:53.349 "auth": { 00:18:53.349 "state": "completed", 00:18:53.349 "digest": "sha256", 00:18:53.349 "dhgroup": "ffdhe3072" 00:18:53.349 } 00:18:53.349 } 00:18:53.349 ]' 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:53.349 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:53.629 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.629 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.629 09:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.888 09:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:18:54.453 09:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.453 09:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:54.453 09:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.453 09:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.453 09:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.453 09:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:54.453 09:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.453 09:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:54.711 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:55.277 00:18:55.277 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.277 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.277 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:55.536 { 00:18:55.536 "cntlid": 19, 00:18:55.536 "qid": 0, 00:18:55.536 "state": "enabled", 00:18:55.536 "listen_address": { 00:18:55.536 "trtype": "TCP", 00:18:55.536 "adrfam": "IPv4", 00:18:55.536 "traddr": "10.0.0.2", 00:18:55.536 "trsvcid": "4420" 00:18:55.536 }, 00:18:55.536 "peer_address": { 00:18:55.536 "trtype": "TCP", 00:18:55.536 "adrfam": "IPv4", 00:18:55.536 "traddr": "10.0.0.1", 00:18:55.536 "trsvcid": "49556" 00:18:55.536 }, 00:18:55.536 "auth": { 00:18:55.536 "state": "completed", 00:18:55.536 "digest": "sha256", 00:18:55.536 "dhgroup": "ffdhe3072" 00:18:55.536 } 00:18:55.536 } 00:18:55.536 ]' 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.536 09:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.103 09:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:18:56.669 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.669 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:56.669 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.669 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.669 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.669 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:56.669 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.669 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:56.928 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:57.187 00:18:57.187 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:57.187 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.187 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:57.753 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.753 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.753 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.753 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.753 09:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.753 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:57.753 { 00:18:57.753 "cntlid": 21, 00:18:57.753 "qid": 0, 00:18:57.753 "state": "enabled", 00:18:57.753 "listen_address": { 00:18:57.753 "trtype": "TCP", 00:18:57.753 "adrfam": "IPv4", 00:18:57.753 "traddr": "10.0.0.2", 00:18:57.753 "trsvcid": "4420" 00:18:57.753 }, 00:18:57.753 "peer_address": { 00:18:57.753 "trtype": "TCP", 00:18:57.753 "adrfam": "IPv4", 00:18:57.753 "traddr": "10.0.0.1", 00:18:57.753 "trsvcid": "57842" 00:18:57.753 }, 00:18:57.753 "auth": { 00:18:57.753 "state": "completed", 00:18:57.753 "digest": "sha256", 00:18:57.753 "dhgroup": "ffdhe3072" 00:18:57.753 } 00:18:57.753 } 00:18:57.753 ]' 00:18:57.754 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:57.754 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.754 09:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:57.754 09:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.754 09:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:57.754 09:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.754 09:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.754 09:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.012 09:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.016 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.590 00:18:59.590 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:59.590 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:59.590 09:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:59.849 { 00:18:59.849 "cntlid": 23, 00:18:59.849 "qid": 0, 00:18:59.849 "state": "enabled", 00:18:59.849 "listen_address": { 00:18:59.849 "trtype": "TCP", 00:18:59.849 "adrfam": "IPv4", 00:18:59.849 "traddr": "10.0.0.2", 00:18:59.849 "trsvcid": 
"4420" 00:18:59.849 }, 00:18:59.849 "peer_address": { 00:18:59.849 "trtype": "TCP", 00:18:59.849 "adrfam": "IPv4", 00:18:59.849 "traddr": "10.0.0.1", 00:18:59.849 "trsvcid": "57870" 00:18:59.849 }, 00:18:59.849 "auth": { 00:18:59.849 "state": "completed", 00:18:59.849 "digest": "sha256", 00:18:59.849 "dhgroup": "ffdhe3072" 00:18:59.849 } 00:18:59.849 } 00:18:59.849 ]' 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.849 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.107 09:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.040 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.298 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:19:01.298 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:01.298 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.298 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:01.298 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.298 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:19:01.299 09:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.299 09:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.299 09:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.299 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:01.299 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:01.557 00:19:01.557 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:01.557 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:01.557 09:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.814 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.814 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.814 09:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.814 09:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.814 09:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.814 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:01.814 { 00:19:01.815 "cntlid": 25, 00:19:01.815 "qid": 0, 00:19:01.815 "state": "enabled", 00:19:01.815 "listen_address": { 00:19:01.815 "trtype": "TCP", 00:19:01.815 "adrfam": "IPv4", 00:19:01.815 "traddr": "10.0.0.2", 00:19:01.815 "trsvcid": "4420" 00:19:01.815 }, 00:19:01.815 "peer_address": { 00:19:01.815 "trtype": "TCP", 00:19:01.815 "adrfam": "IPv4", 00:19:01.815 "traddr": "10.0.0.1", 00:19:01.815 "trsvcid": "57884" 00:19:01.815 }, 00:19:01.815 "auth": { 00:19:01.815 "state": "completed", 00:19:01.815 "digest": "sha256", 00:19:01.815 "dhgroup": "ffdhe4096" 00:19:01.815 } 00:19:01.815 } 00:19:01.815 ]' 00:19:01.815 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:01.815 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.815 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:01.815 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.815 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:02.072 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.072 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.072 09:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.330 09:12:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:19:02.895 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.895 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:02.895 09:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.895 09:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.895 09:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.895 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:02.895 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.895 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:03.462 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:03.720 00:19:03.720 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:03.720 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:03.720 09:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:03.978 { 00:19:03.978 "cntlid": 27, 00:19:03.978 "qid": 0, 00:19:03.978 "state": "enabled", 00:19:03.978 "listen_address": { 00:19:03.978 "trtype": "TCP", 00:19:03.978 "adrfam": "IPv4", 00:19:03.978 "traddr": "10.0.0.2", 00:19:03.978 "trsvcid": "4420" 00:19:03.978 }, 00:19:03.978 "peer_address": { 00:19:03.978 "trtype": "TCP", 00:19:03.978 "adrfam": "IPv4", 00:19:03.978 "traddr": "10.0.0.1", 00:19:03.978 "trsvcid": "57910" 00:19:03.978 }, 00:19:03.978 "auth": { 00:19:03.978 "state": "completed", 00:19:03.978 "digest": "sha256", 00:19:03.978 "dhgroup": "ffdhe4096" 00:19:03.978 } 00:19:03.978 } 00:19:03.978 ]' 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.978 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.236 09:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:19:04.801 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.060 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:05.060 09:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.060 09:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.060 09:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.060 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:05.060 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.060 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:05.318 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:05.575 00:19:05.575 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:05.575 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:05.575 09:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:05.832 { 00:19:05.832 "cntlid": 29, 00:19:05.832 "qid": 0, 00:19:05.832 "state": "enabled", 00:19:05.832 "listen_address": { 00:19:05.832 "trtype": "TCP", 00:19:05.832 "adrfam": "IPv4", 00:19:05.832 "traddr": "10.0.0.2", 00:19:05.832 "trsvcid": "4420" 00:19:05.832 }, 00:19:05.832 "peer_address": { 00:19:05.832 "trtype": "TCP", 00:19:05.832 "adrfam": "IPv4", 00:19:05.832 "traddr": "10.0.0.1", 00:19:05.832 "trsvcid": "57942" 00:19:05.832 }, 00:19:05.832 "auth": { 00:19:05.832 "state": "completed", 00:19:05.832 "digest": "sha256", 00:19:05.832 "dhgroup": "ffdhe4096" 00:19:05.832 } 00:19:05.832 } 00:19:05.832 ]' 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:05.832 
09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.832 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.089 09:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:19:07.021 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.021 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:07.021 09:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.021 09:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.021 09:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.021 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:07.021 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.021 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.279 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.537 00:19:07.537 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:07.537 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:07.537 09:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.795 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.795 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.795 09:12:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.795 09:12:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.795 09:12:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.795 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:07.795 { 00:19:07.795 "cntlid": 31, 00:19:07.795 "qid": 0, 00:19:07.795 "state": "enabled", 00:19:07.795 "listen_address": { 00:19:07.795 "trtype": "TCP", 00:19:07.795 "adrfam": "IPv4", 00:19:07.795 "traddr": "10.0.0.2", 00:19:07.795 "trsvcid": "4420" 00:19:07.795 }, 00:19:07.795 "peer_address": { 00:19:07.795 "trtype": "TCP", 00:19:07.795 "adrfam": "IPv4", 00:19:07.795 "traddr": "10.0.0.1", 00:19:07.795 "trsvcid": "43722" 00:19:07.795 }, 00:19:07.795 "auth": { 00:19:07.795 "state": "completed", 00:19:07.795 "digest": "sha256", 00:19:07.795 "dhgroup": "ffdhe4096" 00:19:07.795 } 00:19:07.795 } 00:19:07.795 ]' 00:19:07.795 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:08.054 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.054 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:08.054 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.054 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:08.054 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.054 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.054 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.310 09:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:08.877 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.136 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:09.395 09:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:09.654 00:19:09.654 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:09.654 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:09.654 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:09.913 { 00:19:09.913 "cntlid": 33, 00:19:09.913 "qid": 0, 00:19:09.913 "state": "enabled", 00:19:09.913 "listen_address": { 00:19:09.913 
"trtype": "TCP", 00:19:09.913 "adrfam": "IPv4", 00:19:09.913 "traddr": "10.0.0.2", 00:19:09.913 "trsvcid": "4420" 00:19:09.913 }, 00:19:09.913 "peer_address": { 00:19:09.913 "trtype": "TCP", 00:19:09.913 "adrfam": "IPv4", 00:19:09.913 "traddr": "10.0.0.1", 00:19:09.913 "trsvcid": "43742" 00:19:09.913 }, 00:19:09.913 "auth": { 00:19:09.913 "state": "completed", 00:19:09.913 "digest": "sha256", 00:19:09.913 "dhgroup": "ffdhe6144" 00:19:09.913 } 00:19:09.913 } 00:19:09.913 ]' 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.913 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:10.172 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.172 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.172 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.172 09:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:19:11.113 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.113 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:11.113 09:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.113 09:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.113 09:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.113 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:11.113 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.113 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:11.372 09:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:11.939 00:19:11.939 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:11.939 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:11.939 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:12.206 { 00:19:12.206 "cntlid": 35, 00:19:12.206 "qid": 0, 00:19:12.206 "state": "enabled", 00:19:12.206 "listen_address": { 00:19:12.206 "trtype": "TCP", 00:19:12.206 "adrfam": "IPv4", 00:19:12.206 "traddr": "10.0.0.2", 00:19:12.206 "trsvcid": "4420" 00:19:12.206 }, 00:19:12.206 "peer_address": { 00:19:12.206 "trtype": "TCP", 00:19:12.206 "adrfam": "IPv4", 00:19:12.206 "traddr": "10.0.0.1", 00:19:12.206 "trsvcid": "43768" 00:19:12.206 }, 00:19:12.206 "auth": { 00:19:12.206 "state": "completed", 00:19:12.206 "digest": "sha256", 00:19:12.206 "dhgroup": "ffdhe6144" 00:19:12.206 } 00:19:12.206 } 00:19:12.206 ]' 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.206 09:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.464 09:12:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:19:13.399 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.399 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:13.399 09:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.399 09:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.399 09:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.399 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:13.399 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.399 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.657 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:19:13.657 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:13.657 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.657 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:13.657 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.657 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:19:13.658 09:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.658 09:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.658 09:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.658 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:13.658 09:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:13.917 00:19:13.917 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:13.917 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.917 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
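The xtrace above is one pass of a loop that walks every DH-HMAC-CHAP dhgroup/key combination (the "for dhgroup" / "for keyid" frames from target/auth.sh): constrain the host to a single digest and dhgroup, allow the host NQN on the subsystem with one of the pre-loaded keys, attach a controller (authentication runs during that attach), read the negotiated parameters back from the target's qpair listing, then tear down and repeat the connect with the kernel initiator before removing the host. What follows is a minimal bash condensation of that flow, not part of the original log; it reuses only commands, paths, NQNs and the key0 secret that appear in this trace, and it assumes the SPDK target (default RPC socket) and the host bdev application (/var/tmp/host.sock) are already running and that keys key0-key3 were registered earlier in the run, outside this excerpt.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as traced above (assumptions noted in comments).
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock                      # host-side SPDK app, as in the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b
digest=sha256
dhgroup=ffdhe4096
key=key0                                         # key0..key3 assumed loaded earlier in the run

# Restrict the host to one digest/dhgroup combination for this pass.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host NQN on the subsystem with the key under test (target-side RPC).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

# Attach a controller from the host side; DH-HMAC-CHAP runs during this attach.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target reports the negotiated auth parameters on the queue pair.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down, repeat the check with the kernel initiator, then clean up.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "${hostnqn#nqn.2014-08.org.nvmexpress:uuid:}" \
    --dhchap-secret "DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==:"  # key0 secret from this trace
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"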
00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:14.176 { 00:19:14.176 "cntlid": 37, 00:19:14.176 "qid": 0, 00:19:14.176 "state": "enabled", 00:19:14.176 "listen_address": { 00:19:14.176 "trtype": "TCP", 00:19:14.176 "adrfam": "IPv4", 00:19:14.176 "traddr": "10.0.0.2", 00:19:14.176 "trsvcid": "4420" 00:19:14.176 }, 00:19:14.176 "peer_address": { 00:19:14.176 "trtype": "TCP", 00:19:14.176 "adrfam": "IPv4", 00:19:14.176 "traddr": "10.0.0.1", 00:19:14.176 "trsvcid": "43784" 00:19:14.176 }, 00:19:14.176 "auth": { 00:19:14.176 "state": "completed", 00:19:14.176 "digest": "sha256", 00:19:14.176 "dhgroup": "ffdhe6144" 00:19:14.176 } 00:19:14.176 } 00:19:14.176 ]' 00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.176 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:14.436 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.436 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:14.436 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.436 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.436 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.695 09:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:19:15.262 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.262 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:15.262 09:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.262 09:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.262 09:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.262 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:15.262 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.262 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.520 09:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.086 00:19:16.086 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:16.086 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.086 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:16.344 { 00:19:16.344 "cntlid": 39, 00:19:16.344 "qid": 0, 00:19:16.344 "state": "enabled", 00:19:16.344 "listen_address": { 00:19:16.344 "trtype": "TCP", 00:19:16.344 "adrfam": "IPv4", 00:19:16.344 "traddr": "10.0.0.2", 00:19:16.344 "trsvcid": "4420" 00:19:16.344 }, 00:19:16.344 "peer_address": { 00:19:16.344 "trtype": "TCP", 00:19:16.344 "adrfam": "IPv4", 00:19:16.344 "traddr": "10.0.0.1", 00:19:16.344 "trsvcid": "43806" 00:19:16.344 }, 00:19:16.344 "auth": { 00:19:16.344 "state": "completed", 00:19:16.344 "digest": "sha256", 00:19:16.344 "dhgroup": "ffdhe6144" 00:19:16.344 } 00:19:16.344 } 00:19:16.344 ]' 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:16.344 
09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.344 09:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.601 09:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.535 09:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.794 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:17.794 09:12:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:18.359 00:19:18.359 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:18.359 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.359 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:18.618 { 00:19:18.618 "cntlid": 41, 00:19:18.618 "qid": 0, 00:19:18.618 "state": "enabled", 00:19:18.618 "listen_address": { 00:19:18.618 "trtype": "TCP", 00:19:18.618 "adrfam": "IPv4", 00:19:18.618 "traddr": "10.0.0.2", 00:19:18.618 "trsvcid": "4420" 00:19:18.618 }, 00:19:18.618 "peer_address": { 00:19:18.618 "trtype": "TCP", 00:19:18.618 "adrfam": "IPv4", 00:19:18.618 "traddr": "10.0.0.1", 00:19:18.618 "trsvcid": "36838" 00:19:18.618 }, 00:19:18.618 "auth": { 00:19:18.618 "state": "completed", 00:19:18.618 "digest": "sha256", 00:19:18.618 "dhgroup": "ffdhe8192" 00:19:18.618 } 00:19:18.618 } 00:19:18.618 ]' 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.618 09:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:18.618 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.618 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:18.618 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.618 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.618 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.876 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:19:19.812 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.812 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:19.812 09:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.812 09:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.812 09:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.812 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:19.812 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.812 09:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:19.812 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:20.748 00:19:20.748 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:20.748 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:20.748 09:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.748 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.748 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.748 09:12:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.748 09:12:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.748 09:12:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.748 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:20.748 { 00:19:20.748 "cntlid": 43, 00:19:20.748 "qid": 0, 00:19:20.748 "state": "enabled", 00:19:20.748 "listen_address": { 
00:19:20.748 "trtype": "TCP", 00:19:20.748 "adrfam": "IPv4", 00:19:20.748 "traddr": "10.0.0.2", 00:19:20.748 "trsvcid": "4420" 00:19:20.748 }, 00:19:20.748 "peer_address": { 00:19:20.748 "trtype": "TCP", 00:19:20.748 "adrfam": "IPv4", 00:19:20.748 "traddr": "10.0.0.1", 00:19:20.748 "trsvcid": "36850" 00:19:20.748 }, 00:19:20.748 "auth": { 00:19:20.748 "state": "completed", 00:19:20.748 "digest": "sha256", 00:19:20.748 "dhgroup": "ffdhe8192" 00:19:20.748 } 00:19:20.748 } 00:19:20.748 ]' 00:19:20.748 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:21.006 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.006 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:21.006 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.006 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:21.006 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.006 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.006 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.264 09:12:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:22.210 09:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:22.778 00:19:22.778 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:22.778 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.778 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:23.036 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:23.294 { 00:19:23.294 "cntlid": 45, 00:19:23.294 "qid": 0, 00:19:23.294 "state": "enabled", 00:19:23.294 "listen_address": { 00:19:23.294 "trtype": "TCP", 00:19:23.294 "adrfam": "IPv4", 00:19:23.294 "traddr": "10.0.0.2", 00:19:23.294 "trsvcid": "4420" 00:19:23.294 }, 00:19:23.294 "peer_address": { 00:19:23.294 "trtype": "TCP", 00:19:23.294 "adrfam": "IPv4", 00:19:23.294 "traddr": "10.0.0.1", 00:19:23.294 "trsvcid": "36878" 00:19:23.294 }, 00:19:23.294 "auth": { 00:19:23.294 "state": "completed", 00:19:23.294 "digest": "sha256", 00:19:23.294 "dhgroup": "ffdhe8192" 00:19:23.294 } 00:19:23.294 } 00:19:23.294 ]' 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.294 09:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.862 09:12:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:19:24.429 09:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.429 09:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:24.429 09:12:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.429 09:12:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.429 09:12:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.429 09:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:24.429 09:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.429 09:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.714 09:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:19:24.714 09:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.714 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.281 00:19:25.281 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:25.281 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:25.281 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.539 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:25.539 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.539 09:12:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.539 09:12:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.539 09:12:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.539 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:25.539 { 00:19:25.539 "cntlid": 47, 00:19:25.539 "qid": 0, 00:19:25.539 "state": "enabled", 00:19:25.539 "listen_address": { 00:19:25.539 "trtype": "TCP", 00:19:25.539 "adrfam": "IPv4", 00:19:25.539 "traddr": "10.0.0.2", 00:19:25.539 "trsvcid": "4420" 00:19:25.539 }, 00:19:25.539 "peer_address": { 00:19:25.539 "trtype": "TCP", 00:19:25.539 "adrfam": "IPv4", 00:19:25.539 "traddr": "10.0.0.1", 00:19:25.539 "trsvcid": "36916" 00:19:25.539 }, 00:19:25.539 "auth": { 00:19:25.539 "state": "completed", 00:19:25.539 "digest": "sha256", 00:19:25.539 "dhgroup": "ffdhe8192" 00:19:25.539 } 00:19:25.539 } 00:19:25.539 ]' 00:19:25.539 09:12:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:25.797 09:12:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.797 09:12:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:25.797 09:12:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.797 09:12:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:25.797 09:12:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.797 09:12:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.797 09:12:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.055 09:12:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:19:26.672 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.672 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:26.930 09:12:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.930 09:12:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.930 09:12:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.930 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:19:26.930 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.930 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:26.930 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups null 00:19:26.930 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:27.189 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:27.448 00:19:27.448 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:27.448 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.448 09:12:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:27.706 { 00:19:27.706 "cntlid": 49, 00:19:27.706 "qid": 0, 00:19:27.706 "state": "enabled", 00:19:27.706 "listen_address": { 00:19:27.706 "trtype": "TCP", 00:19:27.706 "adrfam": "IPv4", 00:19:27.706 "traddr": "10.0.0.2", 00:19:27.706 "trsvcid": "4420" 00:19:27.706 }, 00:19:27.706 "peer_address": { 00:19:27.706 "trtype": "TCP", 00:19:27.706 "adrfam": "IPv4", 00:19:27.706 "traddr": "10.0.0.1", 00:19:27.706 "trsvcid": "60212" 00:19:27.706 }, 00:19:27.706 "auth": { 00:19:27.706 "state": "completed", 00:19:27.706 "digest": "sha384", 00:19:27.706 "dhgroup": "null" 00:19:27.706 } 00:19:27.706 } 00:19:27.706 ]' 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
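For reference, one full connect_authenticate pass from this trace condenses to the sketch below. It is assembled only from commands visible in the log above (host RPC socket /var/tmp/host.sock, target 10.0.0.2:4420, subsystem nqn.2024-03.io.spdk:cnode0, host NQN nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b); the digest/dhgroup/key values are the ones of the current pass (sha384, null, key0) and change per iteration, and the target-side calls are shown against rpc.py's default socket, which is an assumption since the trace issues them through the harness's rpc_cmd wrapper.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# host side: restrict the NVMe driver to the digest/dhgroup under test
$rpc_py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

# target side: allow the host on the subsystem with the DH-HMAC-CHAP key under test
$rpc_py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0

# host side: attach a controller with the same key, which forces DH-HMAC-CHAP authentication
$rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0

# verify the controller exists, check the negotiated auth state on the target, then tear down
$rpc_py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'                    # expect nvme0
$rpc_py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'        # expect completed
$rpc_py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0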
00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:27.706 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:27.965 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.965 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.965 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.224 09:12:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:19:28.791 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.791 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:28.791 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.791 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.791 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.791 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:28.791 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.791 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 00:19:29.050 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:29.309 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:29.568 { 00:19:29.568 "cntlid": 51, 00:19:29.568 "qid": 0, 00:19:29.568 "state": "enabled", 00:19:29.568 "listen_address": { 00:19:29.568 "trtype": "TCP", 00:19:29.568 "adrfam": "IPv4", 00:19:29.568 "traddr": "10.0.0.2", 00:19:29.568 "trsvcid": "4420" 00:19:29.568 }, 00:19:29.568 "peer_address": { 00:19:29.568 "trtype": "TCP", 00:19:29.568 "adrfam": "IPv4", 00:19:29.568 "traddr": "10.0.0.1", 00:19:29.568 "trsvcid": "60240" 00:19:29.568 }, 00:19:29.568 "auth": { 00:19:29.568 "state": "completed", 00:19:29.568 "digest": "sha384", 00:19:29.568 "dhgroup": "null" 00:19:29.568 } 00:19:29.568 } 00:19:29.568 ]' 00:19:29.568 09:12:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:29.826 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.826 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:29.826 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:29.826 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:29.826 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.826 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.826 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.084 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:19:30.651 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.651 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:30.651 09:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.651 09:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.651 09:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.651 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:30.651 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.651 09:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.908 09:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.909 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:30.909 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:31.166 00:19:31.166 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:31.166 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.166 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:31.734 { 00:19:31.734 "cntlid": 53, 00:19:31.734 "qid": 0, 00:19:31.734 "state": "enabled", 00:19:31.734 "listen_address": { 00:19:31.734 
"trtype": "TCP", 00:19:31.734 "adrfam": "IPv4", 00:19:31.734 "traddr": "10.0.0.2", 00:19:31.734 "trsvcid": "4420" 00:19:31.734 }, 00:19:31.734 "peer_address": { 00:19:31.734 "trtype": "TCP", 00:19:31.734 "adrfam": "IPv4", 00:19:31.734 "traddr": "10.0.0.1", 00:19:31.734 "trsvcid": "60256" 00:19:31.734 }, 00:19:31.734 "auth": { 00:19:31.734 "state": "completed", 00:19:31.734 "digest": "sha384", 00:19:31.734 "dhgroup": "null" 00:19:31.734 } 00:19:31.734 } 00:19:31.734 ]' 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:31.734 09:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:31.734 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:31.734 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.734 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.735 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.993 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:19:32.561 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.561 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:32.561 09:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.561 09:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.561 09:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.561 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:32.561 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.561 09:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.129 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.387 00:19:33.387 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:33.387 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.387 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:33.646 { 00:19:33.646 "cntlid": 55, 00:19:33.646 "qid": 0, 00:19:33.646 "state": "enabled", 00:19:33.646 "listen_address": { 00:19:33.646 "trtype": "TCP", 00:19:33.646 "adrfam": "IPv4", 00:19:33.646 "traddr": "10.0.0.2", 00:19:33.646 "trsvcid": "4420" 00:19:33.646 }, 00:19:33.646 "peer_address": { 00:19:33.646 "trtype": "TCP", 00:19:33.646 "adrfam": "IPv4", 00:19:33.646 "traddr": "10.0.0.1", 00:19:33.646 "trsvcid": "60268" 00:19:33.646 }, 00:19:33.646 "auth": { 00:19:33.646 "state": "completed", 00:19:33.646 "digest": "sha384", 00:19:33.646 "dhgroup": "null" 00:19:33.646 } 00:19:33.646 } 00:19:33.646 ]' 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.646 09:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:33.646 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:33.646 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:33.646 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.646 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.646 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.904 09:12:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:34.471 09:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:34.728 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:19:34.728 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:34.728 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.728 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.728 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.728 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:19:34.729 09:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.729 09:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.729 09:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.729 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:34.729 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:34.987 00:19:34.987 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:34.987 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:34.987 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:35.245 { 00:19:35.245 "cntlid": 57, 00:19:35.245 "qid": 0, 00:19:35.245 "state": "enabled", 00:19:35.245 "listen_address": { 00:19:35.245 "trtype": "TCP", 00:19:35.245 "adrfam": "IPv4", 00:19:35.245 "traddr": "10.0.0.2", 00:19:35.245 "trsvcid": "4420" 00:19:35.245 }, 00:19:35.245 "peer_address": { 00:19:35.245 "trtype": "TCP", 00:19:35.245 "adrfam": "IPv4", 00:19:35.245 "traddr": "10.0.0.1", 00:19:35.245 "trsvcid": "60302" 00:19:35.245 }, 00:19:35.245 "auth": { 00:19:35.245 "state": "completed", 00:19:35.245 "digest": "sha384", 00:19:35.245 "dhgroup": "ffdhe2048" 00:19:35.245 } 00:19:35.245 } 00:19:35.245 ]' 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.245 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:35.505 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.505 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.505 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.505 09:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:19:36.073 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.073 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:36.073 09:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.073 09:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.073 09:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.073 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:36.073 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.073 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:36.640 09:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:36.898 00:19:36.898 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:36.898 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:36.898 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.158 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.158 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.158 09:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.158 09:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.158 09:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.158 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:37.158 { 00:19:37.158 "cntlid": 59, 00:19:37.158 "qid": 0, 00:19:37.158 "state": "enabled", 00:19:37.158 "listen_address": { 00:19:37.158 "trtype": "TCP", 00:19:37.159 "adrfam": "IPv4", 00:19:37.159 "traddr": "10.0.0.2", 00:19:37.159 "trsvcid": "4420" 00:19:37.159 }, 00:19:37.159 "peer_address": { 00:19:37.159 "trtype": "TCP", 00:19:37.159 "adrfam": "IPv4", 00:19:37.159 "traddr": "10.0.0.1", 00:19:37.159 "trsvcid": "33066" 00:19:37.159 }, 00:19:37.159 "auth": { 00:19:37.159 "state": "completed", 00:19:37.159 "digest": "sha384", 00:19:37.159 "dhgroup": "ffdhe2048" 00:19:37.159 } 00:19:37.159 } 00:19:37.159 ]' 00:19:37.159 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:37.159 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
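The check that follows every attach in this trace is the same three-field comparison: the target's qpair list is fetched and the negotiated digest, dhgroup, and auth state are matched against the values configured for the pass. A minimal sketch of that assertion, using only the RPC and jq calls visible above with the expected values of the current pass (sha384 / ffdhe2048); the target-side call is again shown against rpc.py's default socket as an assumption.

qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]     # digest negotiated for this pass
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]  # DH group negotiated for this pass
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished successfully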
00:19:37.159 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:37.159 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.159 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:37.159 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.159 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.159 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.724 09:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:19:38.290 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.290 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:38.290 09:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.290 09:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.290 09:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.290 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:38.290 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.290 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:38.548 09:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:38.839 00:19:38.839 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:38.839 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:38.839 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:39.103 { 00:19:39.103 "cntlid": 61, 00:19:39.103 "qid": 0, 00:19:39.103 "state": "enabled", 00:19:39.103 "listen_address": { 00:19:39.103 "trtype": "TCP", 00:19:39.103 "adrfam": "IPv4", 00:19:39.103 "traddr": "10.0.0.2", 00:19:39.103 "trsvcid": "4420" 00:19:39.103 }, 00:19:39.103 "peer_address": { 00:19:39.103 "trtype": "TCP", 00:19:39.103 "adrfam": "IPv4", 00:19:39.103 "traddr": "10.0.0.1", 00:19:39.103 "trsvcid": "33098" 00:19:39.103 }, 00:19:39.103 "auth": { 00:19:39.103 "state": "completed", 00:19:39.103 "digest": "sha384", 00:19:39.103 "dhgroup": "ffdhe2048" 00:19:39.103 } 00:19:39.103 } 00:19:39.103 ]' 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.103 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.362 09:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:19:40.297 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.297 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:40.297 
09:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.297 09:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.297 09:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.297 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:40.297 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.297 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.555 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.556 09:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.819 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:40.819 { 00:19:40.819 "cntlid": 63, 00:19:40.819 "qid": 0, 00:19:40.819 "state": "enabled", 00:19:40.819 "listen_address": { 00:19:40.819 "trtype": "TCP", 00:19:40.819 "adrfam": "IPv4", 00:19:40.819 "traddr": 
"10.0.0.2", 00:19:40.819 "trsvcid": "4420" 00:19:40.819 }, 00:19:40.819 "peer_address": { 00:19:40.819 "trtype": "TCP", 00:19:40.819 "adrfam": "IPv4", 00:19:40.819 "traddr": "10.0.0.1", 00:19:40.819 "trsvcid": "33118" 00:19:40.819 }, 00:19:40.819 "auth": { 00:19:40.819 "state": "completed", 00:19:40.819 "digest": "sha384", 00:19:40.819 "dhgroup": "ffdhe2048" 00:19:40.819 } 00:19:40.819 } 00:19:40.819 ]' 00:19:40.819 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:41.084 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.084 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:41.084 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.084 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:41.084 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.084 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.084 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.343 09:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:41.911 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.188 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:19:42.188 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:42.188 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.189 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:42.189 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.189 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:19:42.189 09:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.189 09:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.189 09:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.189 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:42.189 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:42.448 00:19:42.448 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:42.448 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.448 09:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:42.707 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.707 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.707 09:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.707 09:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.708 09:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.708 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:42.708 { 00:19:42.708 "cntlid": 65, 00:19:42.708 "qid": 0, 00:19:42.708 "state": "enabled", 00:19:42.708 "listen_address": { 00:19:42.708 "trtype": "TCP", 00:19:42.708 "adrfam": "IPv4", 00:19:42.708 "traddr": "10.0.0.2", 00:19:42.708 "trsvcid": "4420" 00:19:42.708 }, 00:19:42.708 "peer_address": { 00:19:42.708 "trtype": "TCP", 00:19:42.708 "adrfam": "IPv4", 00:19:42.708 "traddr": "10.0.0.1", 00:19:42.708 "trsvcid": "33138" 00:19:42.708 }, 00:19:42.708 "auth": { 00:19:42.708 "state": "completed", 00:19:42.708 "digest": "sha384", 00:19:42.708 "dhgroup": "ffdhe3072" 00:19:42.708 } 00:19:42.708 } 00:19:42.708 ]' 00:19:42.708 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:42.967 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.967 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:42.967 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.967 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:42.967 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.967 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.967 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:19:43.226 09:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:19:43.795 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.795 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:43.795 09:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.795 09:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.795 09:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.795 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:43.795 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.795 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.054 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:19:44.054 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:44.054 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.054 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:44.054 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:44.054 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:19:44.054 09:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.054 09:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.312 09:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.312 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:44.312 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:44.631 00:19:44.631 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:44.631 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.631 09:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:44.631 09:12:57 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.631 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.631 09:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.631 09:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.631 09:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.631 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:44.631 { 00:19:44.631 "cntlid": 67, 00:19:44.631 "qid": 0, 00:19:44.631 "state": "enabled", 00:19:44.631 "listen_address": { 00:19:44.631 "trtype": "TCP", 00:19:44.631 "adrfam": "IPv4", 00:19:44.631 "traddr": "10.0.0.2", 00:19:44.631 "trsvcid": "4420" 00:19:44.631 }, 00:19:44.631 "peer_address": { 00:19:44.631 "trtype": "TCP", 00:19:44.631 "adrfam": "IPv4", 00:19:44.631 "traddr": "10.0.0.1", 00:19:44.631 "trsvcid": "33172" 00:19:44.631 }, 00:19:44.631 "auth": { 00:19:44.631 "state": "completed", 00:19:44.631 "digest": "sha384", 00:19:44.631 "dhgroup": "ffdhe3072" 00:19:44.631 } 00:19:44.631 } 00:19:44.631 ]' 00:19:44.631 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:44.889 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.889 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:44.889 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.889 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:44.889 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.889 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.889 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.149 09:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:19:45.719 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.979 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:45.979 09:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.979 09:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 09:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.979 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:45.979 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.979 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.240 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.498 00:19:46.498 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:46.498 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.498 09:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:46.756 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.756 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.756 09:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.756 09:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.756 09:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.757 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:46.757 { 00:19:46.757 "cntlid": 69, 00:19:46.757 "qid": 0, 00:19:46.757 "state": "enabled", 00:19:46.757 "listen_address": { 00:19:46.757 "trtype": "TCP", 00:19:46.757 "adrfam": "IPv4", 00:19:46.757 "traddr": "10.0.0.2", 00:19:46.757 "trsvcid": "4420" 00:19:46.757 }, 00:19:46.757 "peer_address": { 00:19:46.757 "trtype": "TCP", 00:19:46.757 "adrfam": "IPv4", 00:19:46.757 "traddr": "10.0.0.1", 00:19:46.757 "trsvcid": "50964" 00:19:46.757 }, 00:19:46.757 "auth": { 00:19:46.757 "state": "completed", 00:19:46.757 "digest": "sha384", 00:19:46.757 "dhgroup": "ffdhe3072" 00:19:46.757 } 00:19:46.757 } 00:19:46.757 ]' 00:19:46.757 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:46.757 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.757 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:19:47.015 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.015 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:47.015 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.015 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.015 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.273 09:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:19:47.837 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.837 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:47.837 09:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.837 09:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.837 09:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.837 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:47.837 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:47.837 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.093 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:19:48.093 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:48.093 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.093 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.093 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:48.093 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:19:48.094 09:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.094 09:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.094 09:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.094 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.094 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.657 00:19:48.657 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:48.657 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.657 09:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:48.914 { 00:19:48.914 "cntlid": 71, 00:19:48.914 "qid": 0, 00:19:48.914 "state": "enabled", 00:19:48.914 "listen_address": { 00:19:48.914 "trtype": "TCP", 00:19:48.914 "adrfam": "IPv4", 00:19:48.914 "traddr": "10.0.0.2", 00:19:48.914 "trsvcid": "4420" 00:19:48.914 }, 00:19:48.914 "peer_address": { 00:19:48.914 "trtype": "TCP", 00:19:48.914 "adrfam": "IPv4", 00:19:48.914 "traddr": "10.0.0.1", 00:19:48.914 "trsvcid": "51000" 00:19:48.914 }, 00:19:48.914 "auth": { 00:19:48.914 "state": "completed", 00:19:48.914 "digest": "sha384", 00:19:48.914 "dhgroup": "ffdhe3072" 00:19:48.914 } 00:19:48.914 } 00:19:48.914 ]' 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.914 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.173 09:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:19:49.738 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.738 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:49.738 09:13:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.738 09:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.738 09:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.738 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.738 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:49.738 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:49.738 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:49.996 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:50.294 00:19:50.294 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:50.294 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.294 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:50.551 { 00:19:50.551 "cntlid": 73, 00:19:50.551 "qid": 0, 00:19:50.551 "state": "enabled", 00:19:50.551 
"listen_address": { 00:19:50.551 "trtype": "TCP", 00:19:50.551 "adrfam": "IPv4", 00:19:50.551 "traddr": "10.0.0.2", 00:19:50.551 "trsvcid": "4420" 00:19:50.551 }, 00:19:50.551 "peer_address": { 00:19:50.551 "trtype": "TCP", 00:19:50.551 "adrfam": "IPv4", 00:19:50.551 "traddr": "10.0.0.1", 00:19:50.551 "trsvcid": "51028" 00:19:50.551 }, 00:19:50.551 "auth": { 00:19:50.551 "state": "completed", 00:19:50.551 "digest": "sha384", 00:19:50.551 "dhgroup": "ffdhe4096" 00:19:50.551 } 00:19:50.551 } 00:19:50.551 ]' 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.551 09:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:50.808 09:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.808 09:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.808 09:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.066 09:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:19:51.630 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.630 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:51.630 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.630 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.630 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.630 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:51.630 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.630 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.887 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:19:51.887 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:51.887 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.887 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.887 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.887 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:19:51.887 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.888 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.888 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.888 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:51.888 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:52.453 00:19:52.453 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:52.453 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:52.453 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.711 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.711 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.711 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.711 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.711 09:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.711 09:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:52.711 { 00:19:52.711 "cntlid": 75, 00:19:52.711 "qid": 0, 00:19:52.711 "state": "enabled", 00:19:52.711 "listen_address": { 00:19:52.711 "trtype": "TCP", 00:19:52.711 "adrfam": "IPv4", 00:19:52.711 "traddr": "10.0.0.2", 00:19:52.711 "trsvcid": "4420" 00:19:52.711 }, 00:19:52.711 "peer_address": { 00:19:52.711 "trtype": "TCP", 00:19:52.711 "adrfam": "IPv4", 00:19:52.711 "traddr": "10.0.0.1", 00:19:52.711 "trsvcid": "51056" 00:19:52.711 }, 00:19:52.711 "auth": { 00:19:52.711 "state": "completed", 00:19:52.711 "digest": "sha384", 00:19:52.711 "dhgroup": "ffdhe4096" 00:19:52.711 } 00:19:52.711 } 00:19:52.711 ]' 00:19:52.711 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:52.711 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.711 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:52.711 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.711 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:52.711 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.711 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.711 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.970 
09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:19:53.905 09:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.905 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:54.163 00:19:54.163 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:54.163 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.163 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:54.421 { 00:19:54.421 "cntlid": 77, 00:19:54.421 "qid": 0, 00:19:54.421 "state": "enabled", 00:19:54.421 "listen_address": { 00:19:54.421 "trtype": "TCP", 00:19:54.421 "adrfam": "IPv4", 00:19:54.421 "traddr": "10.0.0.2", 00:19:54.421 "trsvcid": "4420" 00:19:54.421 }, 00:19:54.421 "peer_address": { 00:19:54.421 "trtype": "TCP", 00:19:54.421 "adrfam": "IPv4", 00:19:54.421 "traddr": "10.0.0.1", 00:19:54.421 "trsvcid": "51086" 00:19:54.421 }, 00:19:54.421 "auth": { 00:19:54.421 "state": "completed", 00:19:54.421 "digest": "sha384", 00:19:54.421 "dhgroup": "ffdhe4096" 00:19:54.421 } 00:19:54.421 } 00:19:54.421 ]' 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.421 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:54.680 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.680 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:54.680 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.680 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.680 09:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.937 09:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:19:55.501 09:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.501 09:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:55.501 09:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.501 09:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.501 09:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.501 09:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:55.501 09:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.501 09:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.759 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.017 00:19:56.017 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:56.017 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:56.017 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.275 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.275 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.275 09:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.275 09:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.275 09:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.275 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:56.275 { 00:19:56.275 "cntlid": 79, 00:19:56.275 "qid": 0, 00:19:56.275 "state": "enabled", 00:19:56.275 "listen_address": { 00:19:56.275 "trtype": "TCP", 00:19:56.275 "adrfam": "IPv4", 00:19:56.275 "traddr": "10.0.0.2", 00:19:56.275 "trsvcid": "4420" 00:19:56.275 }, 00:19:56.275 "peer_address": { 00:19:56.275 "trtype": "TCP", 00:19:56.275 "adrfam": "IPv4", 00:19:56.275 "traddr": "10.0.0.1", 00:19:56.275 "trsvcid": "51104" 00:19:56.275 }, 00:19:56.275 "auth": { 00:19:56.275 "state": "completed", 00:19:56.275 "digest": "sha384", 00:19:56.275 "dhgroup": "ffdhe4096" 00:19:56.275 } 00:19:56.275 } 00:19:56.275 ]' 00:19:56.275 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:56.534 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.534 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:56.534 
09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.534 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:56.534 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.534 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.534 09:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.793 09:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.728 09:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.728 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:57.728 09:13:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:58.336 00:19:58.336 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:58.336 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:58.336 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.336 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.336 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.336 09:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:58.595 { 00:19:58.595 "cntlid": 81, 00:19:58.595 "qid": 0, 00:19:58.595 "state": "enabled", 00:19:58.595 "listen_address": { 00:19:58.595 "trtype": "TCP", 00:19:58.595 "adrfam": "IPv4", 00:19:58.595 "traddr": "10.0.0.2", 00:19:58.595 "trsvcid": "4420" 00:19:58.595 }, 00:19:58.595 "peer_address": { 00:19:58.595 "trtype": "TCP", 00:19:58.595 "adrfam": "IPv4", 00:19:58.595 "traddr": "10.0.0.1", 00:19:58.595 "trsvcid": "33666" 00:19:58.595 }, 00:19:58.595 "auth": { 00:19:58.595 "state": "completed", 00:19:58.595 "digest": "sha384", 00:19:58.595 "dhgroup": "ffdhe6144" 00:19:58.595 } 00:19:58.595 } 00:19:58.595 ]' 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.595 09:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.854 09:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:19:59.790 09:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.790 09:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:19:59.790 09:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.790 09:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.790 09:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.790 09:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:59.790 09:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.790 09:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:59.790 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:00.357 00:20:00.357 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:00.357 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:00.357 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.615 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.615 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.615 09:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.615 09:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.615 09:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.615 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:00.615 { 00:20:00.615 "cntlid": 83, 00:20:00.615 "qid": 0, 00:20:00.615 "state": "enabled", 00:20:00.615 "listen_address": { 
00:20:00.615 "trtype": "TCP", 00:20:00.615 "adrfam": "IPv4", 00:20:00.615 "traddr": "10.0.0.2", 00:20:00.615 "trsvcid": "4420" 00:20:00.615 }, 00:20:00.615 "peer_address": { 00:20:00.615 "trtype": "TCP", 00:20:00.615 "adrfam": "IPv4", 00:20:00.615 "traddr": "10.0.0.1", 00:20:00.615 "trsvcid": "33706" 00:20:00.615 }, 00:20:00.615 "auth": { 00:20:00.615 "state": "completed", 00:20:00.615 "digest": "sha384", 00:20:00.615 "dhgroup": "ffdhe6144" 00:20:00.615 } 00:20:00.615 } 00:20:00.615 ]' 00:20:00.615 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:00.615 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.616 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:00.616 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.616 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:00.616 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.616 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.616 09:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.873 09:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:20:01.439 09:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.439 09:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:01.439 09:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.439 09:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.439 09:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.439 09:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:01.439 09:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.439 09:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.697 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:20:01.697 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:01.697 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.698 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.698 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.698 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:20:01.698 09:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.698 09:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.698 09:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.698 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:01.698 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:02.264 00:20:02.264 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:02.264 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:02.264 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:02.522 { 00:20:02.522 "cntlid": 85, 00:20:02.522 "qid": 0, 00:20:02.522 "state": "enabled", 00:20:02.522 "listen_address": { 00:20:02.522 "trtype": "TCP", 00:20:02.522 "adrfam": "IPv4", 00:20:02.522 "traddr": "10.0.0.2", 00:20:02.522 "trsvcid": "4420" 00:20:02.522 }, 00:20:02.522 "peer_address": { 00:20:02.522 "trtype": "TCP", 00:20:02.522 "adrfam": "IPv4", 00:20:02.522 "traddr": "10.0.0.1", 00:20:02.522 "trsvcid": "33722" 00:20:02.522 }, 00:20:02.522 "auth": { 00:20:02.522 "state": "completed", 00:20:02.522 "digest": "sha384", 00:20:02.522 "dhgroup": "ffdhe6144" 00:20:02.522 } 00:20:02.522 } 00:20:02.522 ]' 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.522 09:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.780 09:13:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:20:03.714 09:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.714 09:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:03.714 09:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.714 09:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.714 09:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.714 09:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:03.714 09:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.714 09:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.714 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:20:03.714 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:03.714 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.714 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:03.714 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.714 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:20:03.714 09:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.714 09:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.973 09:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.973 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.973 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.231 00:20:04.231 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:04.231 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:04.231 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:04.489 { 00:20:04.489 "cntlid": 87, 00:20:04.489 "qid": 0, 00:20:04.489 "state": "enabled", 00:20:04.489 "listen_address": { 00:20:04.489 "trtype": "TCP", 00:20:04.489 "adrfam": "IPv4", 00:20:04.489 "traddr": "10.0.0.2", 00:20:04.489 "trsvcid": "4420" 00:20:04.489 }, 00:20:04.489 "peer_address": { 00:20:04.489 "trtype": "TCP", 00:20:04.489 "adrfam": "IPv4", 00:20:04.489 "traddr": "10.0.0.1", 00:20:04.489 "trsvcid": "33744" 00:20:04.489 }, 00:20:04.489 "auth": { 00:20:04.489 "state": "completed", 00:20:04.489 "digest": "sha384", 00:20:04.489 "dhgroup": "ffdhe6144" 00:20:04.489 } 00:20:04.489 } 00:20:04.489 ]' 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.489 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:04.748 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.748 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:04.748 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.748 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.748 09:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.006 09:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.573 09:13:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:05.882 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:06.449 00:20:06.449 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:06.449 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.449 09:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:06.707 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.707 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.707 09:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.707 09:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.707 09:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.707 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:06.707 { 00:20:06.707 "cntlid": 89, 00:20:06.707 "qid": 0, 00:20:06.707 "state": "enabled", 00:20:06.707 "listen_address": { 00:20:06.707 "trtype": "TCP", 00:20:06.707 "adrfam": "IPv4", 00:20:06.707 "traddr": "10.0.0.2", 00:20:06.707 "trsvcid": "4420" 00:20:06.707 }, 00:20:06.707 "peer_address": { 00:20:06.707 "trtype": "TCP", 00:20:06.707 "adrfam": "IPv4", 00:20:06.707 "traddr": "10.0.0.1", 00:20:06.707 "trsvcid": "33776" 00:20:06.707 }, 00:20:06.707 "auth": { 00:20:06.707 "state": "completed", 00:20:06.707 "digest": "sha384", 00:20:06.707 "dhgroup": "ffdhe8192" 00:20:06.707 } 00:20:06.707 } 00:20:06.707 ]' 00:20:06.707 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:06.966 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:20:06.966 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:06.966 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.966 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:06.966 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.966 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.966 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.225 09:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:20:07.790 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.790 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:07.790 09:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.790 09:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.790 09:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.790 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:07.790 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.790 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.048 09:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.049 09:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:08.049 09:13:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:08.615 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:08.871 { 00:20:08.871 "cntlid": 91, 00:20:08.871 "qid": 0, 00:20:08.871 "state": "enabled", 00:20:08.871 "listen_address": { 00:20:08.871 "trtype": "TCP", 00:20:08.871 "adrfam": "IPv4", 00:20:08.871 "traddr": "10.0.0.2", 00:20:08.871 "trsvcid": "4420" 00:20:08.871 }, 00:20:08.871 "peer_address": { 00:20:08.871 "trtype": "TCP", 00:20:08.871 "adrfam": "IPv4", 00:20:08.871 "traddr": "10.0.0.1", 00:20:08.871 "trsvcid": "51082" 00:20:08.871 }, 00:20:08.871 "auth": { 00:20:08.871 "state": "completed", 00:20:08.871 "digest": "sha384", 00:20:08.871 "dhgroup": "ffdhe8192" 00:20:08.871 } 00:20:08.871 } 00:20:08.871 ]' 00:20:08.871 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:09.129 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.129 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:09.129 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.129 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:09.129 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.129 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.129 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.387 09:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:20:09.952 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.952 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:09.952 09:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.952 09:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.952 09:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.952 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:09.952 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.952 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:10.209 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:10.210 09:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:10.807 00:20:10.807 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:10.808 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.808 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:11.081 { 00:20:11.081 "cntlid": 93, 00:20:11.081 "qid": 0, 00:20:11.081 "state": "enabled", 00:20:11.081 "listen_address": { 
00:20:11.081 "trtype": "TCP", 00:20:11.081 "adrfam": "IPv4", 00:20:11.081 "traddr": "10.0.0.2", 00:20:11.081 "trsvcid": "4420" 00:20:11.081 }, 00:20:11.081 "peer_address": { 00:20:11.081 "trtype": "TCP", 00:20:11.081 "adrfam": "IPv4", 00:20:11.081 "traddr": "10.0.0.1", 00:20:11.081 "trsvcid": "51116" 00:20:11.081 }, 00:20:11.081 "auth": { 00:20:11.081 "state": "completed", 00:20:11.081 "digest": "sha384", 00:20:11.081 "dhgroup": "ffdhe8192" 00:20:11.081 } 00:20:11.081 } 00:20:11.081 ]' 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.081 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:11.339 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.339 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:11.339 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.339 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.339 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.596 09:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:20:12.162 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.162 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:12.162 09:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.162 09:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.162 09:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.162 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:12.162 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:12.162 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.726 09:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.291 00:20:13.291 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:13.291 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:13.291 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:13.549 { 00:20:13.549 "cntlid": 95, 00:20:13.549 "qid": 0, 00:20:13.549 "state": "enabled", 00:20:13.549 "listen_address": { 00:20:13.549 "trtype": "TCP", 00:20:13.549 "adrfam": "IPv4", 00:20:13.549 "traddr": "10.0.0.2", 00:20:13.549 "trsvcid": "4420" 00:20:13.549 }, 00:20:13.549 "peer_address": { 00:20:13.549 "trtype": "TCP", 00:20:13.549 "adrfam": "IPv4", 00:20:13.549 "traddr": "10.0.0.1", 00:20:13.549 "trsvcid": "51146" 00:20:13.549 }, 00:20:13.549 "auth": { 00:20:13.549 "state": "completed", 00:20:13.549 "digest": "sha384", 00:20:13.549 "dhgroup": "ffdhe8192" 00:20:13.549 } 00:20:13.549 } 00:20:13.549 ]' 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.549 09:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.805 09:13:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:20:14.369 09:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.626 09:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:14.884 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:15.141 00:20:15.141 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:15.141 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.141 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:15.399 { 00:20:15.399 "cntlid": 97, 00:20:15.399 "qid": 0, 00:20:15.399 "state": "enabled", 00:20:15.399 "listen_address": { 00:20:15.399 "trtype": "TCP", 00:20:15.399 "adrfam": "IPv4", 00:20:15.399 "traddr": "10.0.0.2", 00:20:15.399 "trsvcid": "4420" 00:20:15.399 }, 00:20:15.399 "peer_address": { 00:20:15.399 "trtype": "TCP", 00:20:15.399 "adrfam": "IPv4", 00:20:15.399 "traddr": "10.0.0.1", 00:20:15.399 "trsvcid": "51174" 00:20:15.399 }, 00:20:15.399 "auth": { 00:20:15.399 "state": "completed", 00:20:15.399 "digest": "sha512", 00:20:15.399 "dhgroup": "null" 00:20:15.399 } 00:20:15.399 } 00:20:15.399 ]' 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.399 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.656 09:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:16.590 09:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:17.156 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:17.156 { 00:20:17.156 "cntlid": 99, 00:20:17.156 "qid": 0, 00:20:17.156 "state": "enabled", 00:20:17.156 "listen_address": { 00:20:17.156 "trtype": "TCP", 00:20:17.156 "adrfam": "IPv4", 00:20:17.156 "traddr": "10.0.0.2", 00:20:17.156 "trsvcid": "4420" 00:20:17.156 }, 00:20:17.156 "peer_address": { 00:20:17.156 "trtype": "TCP", 00:20:17.156 "adrfam": "IPv4", 00:20:17.156 "traddr": "10.0.0.1", 00:20:17.156 "trsvcid": "52804" 00:20:17.156 }, 00:20:17.156 "auth": { 00:20:17.156 "state": "completed", 00:20:17.156 "digest": "sha512", 00:20:17.156 "dhgroup": "null" 00:20:17.156 } 00:20:17.156 } 00:20:17.156 ]' 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:17.156 09:13:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.156 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:17.415 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:17.415 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:17.415 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.415 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.415 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.674 09:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:20:18.239 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.239 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:18.239 09:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.239 09:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.239 09:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.239 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:18.239 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:18.239 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.497 09:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.497 09:13:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:18.755 00:20:18.755 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:18.755 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.755 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:19.012 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.012 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.012 09:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.012 09:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.012 09:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.012 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:19.012 { 00:20:19.012 "cntlid": 101, 00:20:19.012 "qid": 0, 00:20:19.012 "state": "enabled", 00:20:19.012 "listen_address": { 00:20:19.012 "trtype": "TCP", 00:20:19.012 "adrfam": "IPv4", 00:20:19.012 "traddr": "10.0.0.2", 00:20:19.012 "trsvcid": "4420" 00:20:19.012 }, 00:20:19.012 "peer_address": { 00:20:19.012 "trtype": "TCP", 00:20:19.012 "adrfam": "IPv4", 00:20:19.012 "traddr": "10.0.0.1", 00:20:19.012 "trsvcid": "52840" 00:20:19.012 }, 00:20:19.012 "auth": { 00:20:19.012 "state": "completed", 00:20:19.012 "digest": "sha512", 00:20:19.012 "dhgroup": "null" 00:20:19.012 } 00:20:19.012 } 00:20:19.012 ]' 00:20:19.012 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:19.269 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.269 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:19.269 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:19.269 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:19.269 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.269 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.269 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.527 09:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:20:20.100 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.100 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:20.100 09:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.100 09:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.100 09:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.100 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:20.101 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:20.101 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:20.361 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:20:20.361 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:20.361 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.361 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:20.361 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:20.361 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:20:20.361 09:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.361 09:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.618 09:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.618 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.618 09:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.909 00:20:20.909 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:20.909 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:20.909 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:21.165 { 00:20:21.165 "cntlid": 103, 00:20:21.165 "qid": 0, 00:20:21.165 "state": "enabled", 00:20:21.165 "listen_address": { 00:20:21.165 
"trtype": "TCP", 00:20:21.165 "adrfam": "IPv4", 00:20:21.165 "traddr": "10.0.0.2", 00:20:21.165 "trsvcid": "4420" 00:20:21.165 }, 00:20:21.165 "peer_address": { 00:20:21.165 "trtype": "TCP", 00:20:21.165 "adrfam": "IPv4", 00:20:21.165 "traddr": "10.0.0.1", 00:20:21.165 "trsvcid": "52878" 00:20:21.165 }, 00:20:21.165 "auth": { 00:20:21.165 "state": "completed", 00:20:21.165 "digest": "sha512", 00:20:21.165 "dhgroup": "null" 00:20:21.165 } 00:20:21.165 } 00:20:21.165 ]' 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.165 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.422 09:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.021 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:22.278 09:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:22.843 00:20:22.843 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:22.843 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.843 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:22.843 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.843 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.843 09:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.843 09:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.843 09:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:23.099 { 00:20:23.099 "cntlid": 105, 00:20:23.099 "qid": 0, 00:20:23.099 "state": "enabled", 00:20:23.099 "listen_address": { 00:20:23.099 "trtype": "TCP", 00:20:23.099 "adrfam": "IPv4", 00:20:23.099 "traddr": "10.0.0.2", 00:20:23.099 "trsvcid": "4420" 00:20:23.099 }, 00:20:23.099 "peer_address": { 00:20:23.099 "trtype": "TCP", 00:20:23.099 "adrfam": "IPv4", 00:20:23.099 "traddr": "10.0.0.1", 00:20:23.099 "trsvcid": "52896" 00:20:23.099 }, 00:20:23.099 "auth": { 00:20:23.099 "state": "completed", 00:20:23.099 "digest": "sha512", 00:20:23.099 "dhgroup": "ffdhe2048" 00:20:23.099 } 00:20:23.099 } 00:20:23.099 ]' 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.099 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.356 09:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:20:24.287 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.287 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:24.287 09:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.287 09:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.287 09:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.287 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:24.287 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:24.287 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:24.545 09:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:24.803 00:20:24.803 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:24.803 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:24.803 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
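For reference, every iteration recorded in this part of the log repeats the same per-digest, per-dhgroup, per-key authentication sequence. The following is a minimal consolidated sketch of one such iteration, built only from commands that appear verbatim in the log; rpc_cmd is the test suite's wrapper for the target-side scripts/rpc.py, the host-side RPC socket is /var/tmp/host.sock, and DHCHAP_SECRET_KEY0 here is a stand-in for the DHHC-1:00:... secret logged above for key0 (this is an illustration of the logged flow, not an excerpt from auth.sh itself):
# Host side: restrict the DH-HMAC-CHAP digest and dhgroup allowed for this iteration
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# Target side: allow the host NQN on the subsystem with the key under test (key0..key3)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0
# Attach a controller from the SPDK host and inspect the negotiated auth parameters
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'   # the test asserts .digest, .dhgroup and .state == "completed"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# Repeat the handshake through the kernel initiator, then remove the host again
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret "$DHCHAP_SECRET_KEY0"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b
This mirrors the connect_authenticate helper (target/auth.sh@34-48 in the log) that the test drives across every digest, dhgroup, and key combination in the surrounding entries.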
00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:25.060 { 00:20:25.060 "cntlid": 107, 00:20:25.060 "qid": 0, 00:20:25.060 "state": "enabled", 00:20:25.060 "listen_address": { 00:20:25.060 "trtype": "TCP", 00:20:25.060 "adrfam": "IPv4", 00:20:25.060 "traddr": "10.0.0.2", 00:20:25.060 "trsvcid": "4420" 00:20:25.060 }, 00:20:25.060 "peer_address": { 00:20:25.060 "trtype": "TCP", 00:20:25.060 "adrfam": "IPv4", 00:20:25.060 "traddr": "10.0.0.1", 00:20:25.060 "trsvcid": "52924" 00:20:25.060 }, 00:20:25.060 "auth": { 00:20:25.060 "state": "completed", 00:20:25.060 "digest": "sha512", 00:20:25.060 "dhgroup": "ffdhe2048" 00:20:25.060 } 00:20:25.060 } 00:20:25.060 ]' 00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.060 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:25.317 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.317 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:25.317 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.317 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.317 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.575 09:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.510 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:26.511 09:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:27.076 00:20:27.076 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:27.076 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.076 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:27.334 { 00:20:27.334 "cntlid": 109, 00:20:27.334 "qid": 0, 00:20:27.334 "state": "enabled", 00:20:27.334 "listen_address": { 00:20:27.334 "trtype": "TCP", 00:20:27.334 "adrfam": "IPv4", 00:20:27.334 "traddr": "10.0.0.2", 00:20:27.334 "trsvcid": "4420" 00:20:27.334 }, 00:20:27.334 "peer_address": { 00:20:27.334 "trtype": "TCP", 00:20:27.334 "adrfam": "IPv4", 00:20:27.334 "traddr": "10.0.0.1", 00:20:27.334 "trsvcid": "34730" 00:20:27.334 }, 00:20:27.334 "auth": { 00:20:27.334 "state": "completed", 00:20:27.334 "digest": "sha512", 00:20:27.334 "dhgroup": "ffdhe2048" 00:20:27.334 } 00:20:27.334 } 00:20:27.334 ]' 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.334 09:13:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.334 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.592 09:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.576 09:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.143 00:20:29.143 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:29.143 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.143 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:29.402 { 00:20:29.402 "cntlid": 111, 00:20:29.402 "qid": 0, 00:20:29.402 "state": "enabled", 00:20:29.402 "listen_address": { 00:20:29.402 "trtype": "TCP", 00:20:29.402 "adrfam": "IPv4", 00:20:29.402 "traddr": "10.0.0.2", 00:20:29.402 "trsvcid": "4420" 00:20:29.402 }, 00:20:29.402 "peer_address": { 00:20:29.402 "trtype": "TCP", 00:20:29.402 "adrfam": "IPv4", 00:20:29.402 "traddr": "10.0.0.1", 00:20:29.402 "trsvcid": "34744" 00:20:29.402 }, 00:20:29.402 "auth": { 00:20:29.402 "state": "completed", 00:20:29.402 "digest": "sha512", 00:20:29.402 "dhgroup": "ffdhe2048" 00:20:29.402 } 00:20:29.402 } 00:20:29.402 ]' 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.402 09:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.660 09:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:30.595 09:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:30.853 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:31.111 00:20:31.111 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:31.111 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:31.111 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.369 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.369 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.369 09:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.369 09:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.369 09:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.369 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:31.369 { 
00:20:31.369 "cntlid": 113, 00:20:31.369 "qid": 0, 00:20:31.369 "state": "enabled", 00:20:31.369 "listen_address": { 00:20:31.369 "trtype": "TCP", 00:20:31.369 "adrfam": "IPv4", 00:20:31.369 "traddr": "10.0.0.2", 00:20:31.369 "trsvcid": "4420" 00:20:31.369 }, 00:20:31.369 "peer_address": { 00:20:31.369 "trtype": "TCP", 00:20:31.369 "adrfam": "IPv4", 00:20:31.369 "traddr": "10.0.0.1", 00:20:31.369 "trsvcid": "34762" 00:20:31.369 }, 00:20:31.369 "auth": { 00:20:31.369 "state": "completed", 00:20:31.369 "digest": "sha512", 00:20:31.369 "dhgroup": "ffdhe3072" 00:20:31.369 } 00:20:31.369 } 00:20:31.369 ]' 00:20:31.369 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:31.627 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.627 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:31.627 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.627 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:31.627 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.627 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.627 09:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.885 09:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:20:32.451 09:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.451 09:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:32.451 09:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.451 09:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.451 09:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.451 09:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:32.451 09:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:32.451 09:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.710 09:13:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:32.710 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:33.277 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:33.277 { 00:20:33.277 "cntlid": 115, 00:20:33.277 "qid": 0, 00:20:33.277 "state": "enabled", 00:20:33.277 "listen_address": { 00:20:33.277 "trtype": "TCP", 00:20:33.277 "adrfam": "IPv4", 00:20:33.277 "traddr": "10.0.0.2", 00:20:33.277 "trsvcid": "4420" 00:20:33.277 }, 00:20:33.277 "peer_address": { 00:20:33.277 "trtype": "TCP", 00:20:33.277 "adrfam": "IPv4", 00:20:33.277 "traddr": "10.0.0.1", 00:20:33.277 "trsvcid": "34776" 00:20:33.277 }, 00:20:33.277 "auth": { 00:20:33.277 "state": "completed", 00:20:33.277 "digest": "sha512", 00:20:33.277 "dhgroup": "ffdhe3072" 00:20:33.277 } 00:20:33.277 } 00:20:33.277 ]' 00:20:33.277 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:33.536 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.536 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:33.536 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.536 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:33.536 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.536 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.536 09:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.793 09:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:20:34.359 09:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.359 09:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:34.359 09:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.359 09:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.359 09:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.359 09:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:34.359 09:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.359 09:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.617 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:20:34.617 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:34.617 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.618 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.618 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.618 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:20:34.618 09:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.618 09:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.618 09:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.618 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:34.618 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:35.239 00:20:35.239 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:35.239 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:35.239 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:35.239 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.239 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.239 09:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.239 09:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:35.497 { 00:20:35.497 "cntlid": 117, 00:20:35.497 "qid": 0, 00:20:35.497 "state": "enabled", 00:20:35.497 "listen_address": { 00:20:35.497 "trtype": "TCP", 00:20:35.497 "adrfam": "IPv4", 00:20:35.497 "traddr": "10.0.0.2", 00:20:35.497 "trsvcid": "4420" 00:20:35.497 }, 00:20:35.497 "peer_address": { 00:20:35.497 "trtype": "TCP", 00:20:35.497 "adrfam": "IPv4", 00:20:35.497 "traddr": "10.0.0.1", 00:20:35.497 "trsvcid": "34804" 00:20:35.497 }, 00:20:35.497 "auth": { 00:20:35.497 "state": "completed", 00:20:35.497 "digest": "sha512", 00:20:35.497 "dhgroup": "ffdhe3072" 00:20:35.497 } 00:20:35.497 } 00:20:35.497 ]' 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.497 09:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.756 09:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:20:36.692 09:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.692 09:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:36.692 09:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.692 09:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.692 09:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.692 09:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:36.692 09:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.692 09:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.692 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.951 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:37.209 { 00:20:37.209 "cntlid": 119, 00:20:37.209 "qid": 0, 00:20:37.209 "state": "enabled", 00:20:37.209 "listen_address": { 00:20:37.209 "trtype": "TCP", 00:20:37.209 "adrfam": "IPv4", 00:20:37.209 "traddr": "10.0.0.2", 00:20:37.209 "trsvcid": "4420" 00:20:37.209 }, 00:20:37.209 "peer_address": { 00:20:37.209 "trtype": "TCP", 00:20:37.209 "adrfam": "IPv4", 00:20:37.209 "traddr": "10.0.0.1", 00:20:37.209 "trsvcid": "53374" 00:20:37.209 }, 00:20:37.209 "auth": { 00:20:37.209 "state": "completed", 00:20:37.209 "digest": "sha512", 00:20:37.209 "dhgroup": "ffdhe3072" 00:20:37.209 } 00:20:37.209 } 00:20:37.209 ]' 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:37.209 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
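The repeating block above is the per-dhgroup/per-key authentication cycle driven by target/auth.sh: the host-side SPDK app (hostrpc, rpc.py on /var/tmp/host.sock) is pinned to one digest/dhgroup pair, the target gets the host NQN added with a specific DH-HMAC-CHAP key, a bdev controller is attached and its qpair auth state checked, and then the same key is exercised through the kernel initiator with nvme connect. A condensed sketch of that loop, assuming the rpc_cmd/hostrpc wrappers and the NQNs from this run; DHCHAP_SECRET stands in for one of the DHHC-1 strings printed in this log and is not a real value:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b
  # hostrpc: same wrapper the log uses for the host-side SPDK application
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
    for key in key0 key1 key2 key3; do
      hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
      hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
              -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
      rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"       # digest/dhgroup/state asserted with jq
      hostrpc bdev_nvme_detach_controller nvme0
      # DHCHAP_SECRET: placeholder for this key's DHHC-1 secret (assumption, not from the script)
      nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
           --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret "$DHCHAP_SECRET"
      nvme disconnect -n "$subnqn"
      rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
  done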
00:20:37.467 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:37.467 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.467 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:37.467 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.467 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.467 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.725 09:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.290 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:38.549 09:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:38.806 00:20:38.806 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:38.806 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:38.806 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:39.373 { 00:20:39.373 "cntlid": 121, 00:20:39.373 "qid": 0, 00:20:39.373 "state": "enabled", 00:20:39.373 "listen_address": { 00:20:39.373 "trtype": "TCP", 00:20:39.373 "adrfam": "IPv4", 00:20:39.373 "traddr": "10.0.0.2", 00:20:39.373 "trsvcid": "4420" 00:20:39.373 }, 00:20:39.373 "peer_address": { 00:20:39.373 "trtype": "TCP", 00:20:39.373 "adrfam": "IPv4", 00:20:39.373 "traddr": "10.0.0.1", 00:20:39.373 "trsvcid": "53402" 00:20:39.373 }, 00:20:39.373 "auth": { 00:20:39.373 "state": "completed", 00:20:39.373 "digest": "sha512", 00:20:39.373 "dhgroup": "ffdhe4096" 00:20:39.373 } 00:20:39.373 } 00:20:39.373 ]' 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.373 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.641 09:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:20:40.212 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.212 09:13:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:40.212 09:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.212 09:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.212 09:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.212 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:40.212 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:40.212 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:40.471 09:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:41.039 00:20:41.039 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:41.039 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.039 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:41.039 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.039 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.039 09:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.039 09:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:41.296 { 
00:20:41.296 "cntlid": 123, 00:20:41.296 "qid": 0, 00:20:41.296 "state": "enabled", 00:20:41.296 "listen_address": { 00:20:41.296 "trtype": "TCP", 00:20:41.296 "adrfam": "IPv4", 00:20:41.296 "traddr": "10.0.0.2", 00:20:41.296 "trsvcid": "4420" 00:20:41.296 }, 00:20:41.296 "peer_address": { 00:20:41.296 "trtype": "TCP", 00:20:41.296 "adrfam": "IPv4", 00:20:41.296 "traddr": "10.0.0.1", 00:20:41.296 "trsvcid": "53424" 00:20:41.296 }, 00:20:41.296 "auth": { 00:20:41.296 "state": "completed", 00:20:41.296 "digest": "sha512", 00:20:41.296 "dhgroup": "ffdhe4096" 00:20:41.296 } 00:20:41.296 } 00:20:41.296 ]' 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.296 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.561 09:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:20:42.129 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.387 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:42.387 09:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.387 09:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.387 09:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.387 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:42.387 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.387 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:42.646 09:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:42.907 00:20:42.907 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:42.907 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.907 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:43.166 { 00:20:43.166 "cntlid": 125, 00:20:43.166 "qid": 0, 00:20:43.166 "state": "enabled", 00:20:43.166 "listen_address": { 00:20:43.166 "trtype": "TCP", 00:20:43.166 "adrfam": "IPv4", 00:20:43.166 "traddr": "10.0.0.2", 00:20:43.166 "trsvcid": "4420" 00:20:43.166 }, 00:20:43.166 "peer_address": { 00:20:43.166 "trtype": "TCP", 00:20:43.166 "adrfam": "IPv4", 00:20:43.166 "traddr": "10.0.0.1", 00:20:43.166 "trsvcid": "53464" 00:20:43.166 }, 00:20:43.166 "auth": { 00:20:43.166 "state": "completed", 00:20:43.166 "digest": "sha512", 00:20:43.166 "dhgroup": "ffdhe4096" 00:20:43.166 } 00:20:43.166 } 00:20:43.166 ]' 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.166 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.426 09:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:20:44.001 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.001 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:44.001 09:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.269 09:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.269 09:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.269 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:44.269 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:44.269 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.541 09:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.813 00:20:44.813 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:44.813 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:44.813 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
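The check that follows each attach is visible above in xtrace form: the controller name from bdev_nvme_get_controllers is matched against nvme0, then the qpair list from nvmf_subsystem_get_qpairs is probed with jq for the negotiated auth parameters (the backslash-escaped patterns such as \s\h\a\5\1\2 are just how bash xtrace prints the right-hand side of [[ x == pattern ]], not corruption). A minimal sketch of those assertions, reusing the jq filters shown in the log:

  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # DH group for this cycle
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished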
00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:45.088 { 00:20:45.088 "cntlid": 127, 00:20:45.088 "qid": 0, 00:20:45.088 "state": "enabled", 00:20:45.088 "listen_address": { 00:20:45.088 "trtype": "TCP", 00:20:45.088 "adrfam": "IPv4", 00:20:45.088 "traddr": "10.0.0.2", 00:20:45.088 "trsvcid": "4420" 00:20:45.088 }, 00:20:45.088 "peer_address": { 00:20:45.088 "trtype": "TCP", 00:20:45.088 "adrfam": "IPv4", 00:20:45.088 "traddr": "10.0.0.1", 00:20:45.088 "trsvcid": "53486" 00:20:45.088 }, 00:20:45.088 "auth": { 00:20:45.088 "state": "completed", 00:20:45.088 "digest": "sha512", 00:20:45.088 "dhgroup": "ffdhe4096" 00:20:45.088 } 00:20:45.088 } 00:20:45.088 ]' 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:45.088 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.359 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:45.359 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.359 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.359 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.621 09:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:20:46.186 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:46.445 09:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:47.011 00:20:47.011 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:47.011 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.011 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:47.269 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.269 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.269 09:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.269 09:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.269 09:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.269 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:47.269 { 00:20:47.269 "cntlid": 129, 00:20:47.269 "qid": 0, 00:20:47.269 "state": "enabled", 00:20:47.269 "listen_address": { 00:20:47.269 "trtype": "TCP", 00:20:47.269 "adrfam": "IPv4", 00:20:47.269 "traddr": "10.0.0.2", 00:20:47.269 "trsvcid": "4420" 00:20:47.269 }, 00:20:47.269 "peer_address": { 00:20:47.269 "trtype": "TCP", 00:20:47.269 "adrfam": "IPv4", 00:20:47.269 "traddr": "10.0.0.1", 00:20:47.269 "trsvcid": "47002" 00:20:47.269 }, 00:20:47.269 "auth": { 00:20:47.269 "state": "completed", 00:20:47.269 "digest": "sha512", 00:20:47.269 "dhgroup": "ffdhe6144" 00:20:47.270 } 00:20:47.270 } 00:20:47.270 ]' 00:20:47.270 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
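Each target-side pass is mirrored on the Linux initiator: nvme connect is handed the same key in its DHHC-1 string form ("DHHC-1:<id>:<base64>:", where the two-digit field identifies the hash associated with the secret), and a clean nvme disconnect plus nvmf_subsystem_remove_host resets the state for the next key. A sketch of that leg with the address and NQNs from this run; SECRET is a placeholder rather than a reusable value:

  # SECRET: placeholder for one of the DHHC-1 strings printed in this log
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
       -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b \
       --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret "$SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
          nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b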
00:20:47.270 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.270 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:47.270 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.270 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:47.270 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.270 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.270 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.528 09:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:20:48.096 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.096 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:48.096 09:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.096 09:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.355 09:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.355 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:48.355 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.355 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.614 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:48.615 09:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:48.873 00:20:48.873 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:48.873 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:48.873 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.134 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.134 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.134 09:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.134 09:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.396 09:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.396 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:49.396 { 00:20:49.396 "cntlid": 131, 00:20:49.396 "qid": 0, 00:20:49.396 "state": "enabled", 00:20:49.396 "listen_address": { 00:20:49.396 "trtype": "TCP", 00:20:49.396 "adrfam": "IPv4", 00:20:49.396 "traddr": "10.0.0.2", 00:20:49.396 "trsvcid": "4420" 00:20:49.396 }, 00:20:49.396 "peer_address": { 00:20:49.396 "trtype": "TCP", 00:20:49.396 "adrfam": "IPv4", 00:20:49.396 "traddr": "10.0.0.1", 00:20:49.396 "trsvcid": "47032" 00:20:49.396 }, 00:20:49.396 "auth": { 00:20:49.396 "state": "completed", 00:20:49.396 "digest": "sha512", 00:20:49.396 "dhgroup": "ffdhe6144" 00:20:49.396 } 00:20:49.396 } 00:20:49.396 ]' 00:20:49.396 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:49.396 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.396 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:49.397 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.397 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:49.397 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.397 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.397 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.662 09:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:20:50.252 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.252 09:14:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:50.252 09:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.252 09:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.252 09:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.252 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:50.252 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:50.252 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.514 09:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:51.080 00:20:51.080 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:51.080 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:51.080 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:51.339 { 00:20:51.339 "cntlid": 133, 
00:20:51.339 "qid": 0, 00:20:51.339 "state": "enabled", 00:20:51.339 "listen_address": { 00:20:51.339 "trtype": "TCP", 00:20:51.339 "adrfam": "IPv4", 00:20:51.339 "traddr": "10.0.0.2", 00:20:51.339 "trsvcid": "4420" 00:20:51.339 }, 00:20:51.339 "peer_address": { 00:20:51.339 "trtype": "TCP", 00:20:51.339 "adrfam": "IPv4", 00:20:51.339 "traddr": "10.0.0.1", 00:20:51.339 "trsvcid": "47046" 00:20:51.339 }, 00:20:51.339 "auth": { 00:20:51.339 "state": "completed", 00:20:51.339 "digest": "sha512", 00:20:51.339 "dhgroup": "ffdhe6144" 00:20:51.339 } 00:20:51.339 } 00:20:51.339 ]' 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.339 09:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.598 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.532 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.533 09:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.101 00:20:53.101 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:53.101 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:53.101 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.360 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.360 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.360 09:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.360 09:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.360 09:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.360 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:53.360 { 00:20:53.360 "cntlid": 135, 00:20:53.360 "qid": 0, 00:20:53.360 "state": "enabled", 00:20:53.360 "listen_address": { 00:20:53.360 "trtype": "TCP", 00:20:53.360 "adrfam": "IPv4", 00:20:53.360 "traddr": "10.0.0.2", 00:20:53.360 "trsvcid": "4420" 00:20:53.360 }, 00:20:53.360 "peer_address": { 00:20:53.361 "trtype": "TCP", 00:20:53.361 "adrfam": "IPv4", 00:20:53.361 "traddr": "10.0.0.1", 00:20:53.361 "trsvcid": "47076" 00:20:53.361 }, 00:20:53.361 "auth": { 00:20:53.361 "state": "completed", 00:20:53.361 "digest": "sha512", 00:20:53.361 "dhgroup": "ffdhe6144" 00:20:53.361 } 00:20:53.361 } 00:20:53.361 ]' 00:20:53.361 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:53.361 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.361 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:53.619 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.619 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:53.619 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.619 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.619 09:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.907 09:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:54.474 09:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:54.733 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.300 00:20:55.300 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:55.300 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.300 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:55.557 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.557 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.557 09:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.558 09:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.558 09:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.558 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:55.558 { 00:20:55.558 "cntlid": 137, 00:20:55.558 "qid": 0, 00:20:55.558 "state": "enabled", 00:20:55.558 "listen_address": { 00:20:55.558 "trtype": "TCP", 00:20:55.558 "adrfam": "IPv4", 00:20:55.558 "traddr": "10.0.0.2", 00:20:55.558 "trsvcid": "4420" 00:20:55.558 }, 00:20:55.558 "peer_address": { 00:20:55.558 "trtype": "TCP", 00:20:55.558 "adrfam": "IPv4", 00:20:55.558 "traddr": "10.0.0.1", 00:20:55.558 "trsvcid": "47110" 00:20:55.558 }, 00:20:55.558 "auth": { 00:20:55.558 "state": "completed", 00:20:55.558 "digest": "sha512", 00:20:55.558 "dhgroup": "ffdhe8192" 00:20:55.558 } 00:20:55.558 } 00:20:55.558 ]' 00:20:55.558 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:55.558 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.558 09:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:55.816 09:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.816 09:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:55.816 09:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.816 09:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.816 09:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.075 09:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:20:56.641 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.641 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:56.641 09:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.641 09:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.641 09:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.641 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:56.641 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe8192 00:20:56.641 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:56.980 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.552 00:20:57.552 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:57.552 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:57.552 09:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:57.809 { 00:20:57.809 "cntlid": 139, 00:20:57.809 "qid": 0, 00:20:57.809 "state": "enabled", 00:20:57.809 "listen_address": { 00:20:57.809 "trtype": "TCP", 00:20:57.809 "adrfam": "IPv4", 00:20:57.809 "traddr": "10.0.0.2", 00:20:57.809 "trsvcid": "4420" 00:20:57.809 }, 00:20:57.809 "peer_address": { 00:20:57.809 "trtype": "TCP", 00:20:57.809 "adrfam": "IPv4", 00:20:57.809 "traddr": "10.0.0.1", 00:20:57.809 "trsvcid": "59994" 00:20:57.809 }, 00:20:57.809 "auth": { 00:20:57.809 "state": "completed", 00:20:57.809 "digest": "sha512", 00:20:57.809 "dhgroup": "ffdhe8192" 00:20:57.809 } 00:20:57.809 } 00:20:57.809 ]' 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.809 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:58.067 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.067 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.067 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.325 09:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:01:ODQ0ODQzNGMxOWUzNDAyYzU5NDU1ZGM3YTdiNzE1ODiDnkXM: 00:20:58.893 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.893 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:20:58.893 09:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.893 09:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.893 09:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.893 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:58.893 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:58.893 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key2 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.153 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.720 00:20:59.720 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:59.720 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:59.720 09:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:00.026 { 00:21:00.026 "cntlid": 141, 00:21:00.026 "qid": 0, 00:21:00.026 "state": "enabled", 00:21:00.026 "listen_address": { 00:21:00.026 "trtype": "TCP", 00:21:00.026 "adrfam": "IPv4", 00:21:00.026 "traddr": "10.0.0.2", 00:21:00.026 "trsvcid": "4420" 00:21:00.026 }, 00:21:00.026 "peer_address": { 00:21:00.026 "trtype": "TCP", 00:21:00.026 "adrfam": "IPv4", 00:21:00.026 "traddr": "10.0.0.1", 00:21:00.026 "trsvcid": "60020" 00:21:00.026 }, 00:21:00.026 "auth": { 00:21:00.026 "state": "completed", 00:21:00.026 "digest": "sha512", 00:21:00.026 "dhgroup": "ffdhe8192" 00:21:00.026 } 00:21:00.026 } 00:21:00.026 ]' 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.026 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.285 09:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:02:YmUyYWJiMjJhNTZjOTNlZDU1NWFiY2M0MGY0ZmRhZjFjYzRkMGM5MmFmNmFkYTBlDqIUHg==: 00:21:00.850 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.850 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:21:00.850 09:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.850 09:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.850 09:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.850 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:00.850 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:00.850 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key3 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.108 09:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.676 00:21:01.676 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:01.676 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:01.676 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.934 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.934 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:01.935 { 00:21:01.935 "cntlid": 143, 00:21:01.935 "qid": 0, 00:21:01.935 
"state": "enabled", 00:21:01.935 "listen_address": { 00:21:01.935 "trtype": "TCP", 00:21:01.935 "adrfam": "IPv4", 00:21:01.935 "traddr": "10.0.0.2", 00:21:01.935 "trsvcid": "4420" 00:21:01.935 }, 00:21:01.935 "peer_address": { 00:21:01.935 "trtype": "TCP", 00:21:01.935 "adrfam": "IPv4", 00:21:01.935 "traddr": "10.0.0.1", 00:21:01.935 "trsvcid": "60052" 00:21:01.935 }, 00:21:01.935 "auth": { 00:21:01.935 "state": "completed", 00:21:01.935 "digest": "sha512", 00:21:01.935 "dhgroup": "ffdhe8192" 00:21:01.935 } 00:21:01.935 } 00:21:01.935 ]' 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.935 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.501 09:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:03:N2YxNTEyZWM4ODE3OTA0MzhiZjBlZjMyNWQxYzI1NTM2YTIzOTViMTA1NmNlNmM3NjE0MTEwN2Y2MDU3ZDI2NMs8rD8=: 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.069 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:21:03.349 
09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key0 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:03.349 09:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:03.917 00:21:03.917 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:03.917 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:03.917 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:04.176 { 00:21:04.176 "cntlid": 145, 00:21:04.176 "qid": 0, 00:21:04.176 "state": "enabled", 00:21:04.176 "listen_address": { 00:21:04.176 "trtype": "TCP", 00:21:04.176 "adrfam": "IPv4", 00:21:04.176 "traddr": "10.0.0.2", 00:21:04.176 "trsvcid": "4420" 00:21:04.176 }, 00:21:04.176 "peer_address": { 00:21:04.176 "trtype": "TCP", 00:21:04.176 "adrfam": "IPv4", 00:21:04.176 "traddr": "10.0.0.1", 00:21:04.176 "trsvcid": "60082" 00:21:04.176 }, 00:21:04.176 "auth": { 00:21:04.176 "state": "completed", 00:21:04.176 "digest": "sha512", 00:21:04.176 "dhgroup": "ffdhe8192" 00:21:04.176 } 00:21:04.176 } 00:21:04.176 ]' 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.176 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.435 09:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid c738663f-2662-4398-b539-15f14394251b --dhchap-secret DHHC-1:00:NmZmNWVlOWEwMDY4NDliZjBkYzQ5ZTUwNDE4ZDYxNTNkNzFjNTM0ZmFmNzhlNDIyXEhl8A==: 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --dhchap-key key1 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.002 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.571 request: 00:21:05.571 { 00:21:05.571 "name": "nvme0", 00:21:05.571 "trtype": "tcp", 00:21:05.571 "traddr": "10.0.0.2", 00:21:05.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b", 00:21:05.571 "adrfam": "ipv4", 00:21:05.571 "trsvcid": "4420", 00:21:05.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:05.571 "dhchap_key": "key2", 00:21:05.571 "method": "bdev_nvme_attach_controller", 00:21:05.571 "req_id": 1 00:21:05.571 } 00:21:05.571 Got JSON-RPC error response 00:21:05.571 response: 00:21:05.571 { 00:21:05.571 "code": -32602, 00:21:05.571 "message": "Invalid parameters" 00:21:05.571 } 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68600 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 68600 ']' 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 68600 00:21:05.571 09:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:21:05.571 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:05.571 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 68600 00:21:05.830 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:05.830 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:05.830 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 68600' 00:21:05.830 killing process with pid 68600 00:21:05.830 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 68600 00:21:05.830 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 68600 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:06.089 09:14:18 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.089 rmmod nvme_tcp 00:21:06.089 rmmod nvme_fabrics 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 68562 ']' 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 68562 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 68562 ']' 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 68562 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 68562 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 68562' 00:21:06.089 killing process with pid 68562 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 68562 00:21:06.089 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 68562 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:06.347 09:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Q3G /tmp/spdk.key-sha256.j4G /tmp/spdk.key-sha384.aSv /tmp/spdk.key-sha512.Pej /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:21:06.606 00:21:06.606 real 2m37.708s 00:21:06.606 user 6m8.613s 00:21:06.606 sys 0m31.916s 00:21:06.606 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:06.606 ************************************ 00:21:06.606 END TEST nvmf_auth_target 00:21:06.606 ************************************ 00:21:06.606 09:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.606 09:14:18 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:06.606 09:14:18 nvmf_tcp -- nvmf/nvmf.sh@60 
-- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:06.606 09:14:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:21:06.606 09:14:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:06.606 09:14:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:06.606 ************************************ 00:21:06.606 START TEST nvmf_bdevio_no_huge 00:21:06.606 ************************************ 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:06.606 * Looking for test storage... 00:21:06.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:06.606 09:14:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:06.606 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:06.606 Cannot find device "nvmf_tgt_br" 00:21:06.606 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:21:06.606 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.606 Cannot find device "nvmf_tgt_br2" 00:21:06.606 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:21:06.606 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:06.606 09:14:19 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:06.907 Cannot find device "nvmf_tgt_br" 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:06.907 Cannot find device "nvmf_tgt_br2" 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:06.907 09:14:19 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:06.907 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:07.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:21:07.174 00:21:07.174 --- 10.0.0.2 ping statistics --- 00:21:07.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.174 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:07.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:07.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:21:07.174 00:21:07.174 --- 10.0.0.3 ping statistics --- 00:21:07.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.174 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:07.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:07.174 00:21:07.174 --- 10.0.0.1 ping statistics --- 00:21:07.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.174 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=71706 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 71706 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 71706 ']' 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:07.174 09:14:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.174 [2024-05-15 09:14:19.438979] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:21:07.174 [2024-05-15 09:14:19.439323] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:07.174 [2024-05-15 09:14:19.597433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.432 [2024-05-15 09:14:19.766335] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.432 [2024-05-15 09:14:19.766889] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.432 [2024-05-15 09:14:19.767334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.432 [2024-05-15 09:14:19.767874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.432 [2024-05-15 09:14:19.768107] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
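The target that just started above runs inside the nvmf_tgt_ns_spdk namespace with --no-huge -s 1024, i.e. with a 1024 MB memory pool allocated from regular pages instead of hugepages, which is the point of the nvmf_bdevio_no_huge test. The network it listens on was assembled by nvmf_veth_init a few lines earlier; condensed into standalone commands (taken from the trace above, stale-device cleanup and error handling omitted), the topology is: the host stays in the root namespace at 10.0.0.1, the target owns 10.0.0.2/10.0.0.3 inside the namespace, and the bridge nvmf_br joins the two veth halves.

  # Condensed sketch of the veth/namespace topology nvmf_veth_init builds (from the trace).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # sanity checks, as in the trace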
00:21:07.432 [2024-05-15 09:14:19.768420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:07.432 [2024-05-15 09:14:19.768618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:07.432 [2024-05-15 09:14:19.768703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:07.432 [2024-05-15 09:14:19.768820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.997 [2024-05-15 09:14:20.408848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.997 Malloc0 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.997 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.255 [2024-05-15 09:14:20.460808] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:21:08.255 [2024-05-15 09:14:20.461354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.255 { 00:21:08.255 "params": { 00:21:08.255 "name": "Nvme$subsystem", 00:21:08.255 "trtype": "$TEST_TRANSPORT", 00:21:08.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.255 "adrfam": "ipv4", 00:21:08.255 "trsvcid": "$NVMF_PORT", 00:21:08.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.255 "hdgst": ${hdgst:-false}, 00:21:08.255 "ddgst": ${ddgst:-false} 00:21:08.255 }, 00:21:08.255 "method": "bdev_nvme_attach_controller" 00:21:08.255 } 00:21:08.255 EOF 00:21:08.255 )") 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:08.255 09:14:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:08.255 "params": { 00:21:08.255 "name": "Nvme1", 00:21:08.255 "trtype": "tcp", 00:21:08.255 "traddr": "10.0.0.2", 00:21:08.255 "adrfam": "ipv4", 00:21:08.255 "trsvcid": "4420", 00:21:08.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.255 "hdgst": false, 00:21:08.255 "ddgst": false 00:21:08.255 }, 00:21:08.255 "method": "bdev_nvme_attach_controller" 00:21:08.255 }' 00:21:08.255 [2024-05-15 09:14:20.519724] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:21:08.255 [2024-05-15 09:14:20.520080] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71742 ] 00:21:08.255 [2024-05-15 09:14:20.672961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:08.514 [2024-05-15 09:14:20.851564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.514 [2024-05-15 09:14:20.851757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.514 [2024-05-15 09:14:20.851761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.773 I/O targets: 00:21:08.773 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:08.773 00:21:08.773 00:21:08.773 CUnit - A unit testing framework for C - Version 2.1-3 00:21:08.773 http://cunit.sourceforge.net/ 00:21:08.773 00:21:08.773 00:21:08.773 Suite: bdevio tests on: Nvme1n1 00:21:08.773 Test: blockdev write read block ...passed 00:21:08.773 Test: blockdev write zeroes read block ...passed 00:21:08.773 Test: blockdev write zeroes read no split ...passed 00:21:08.773 Test: blockdev write zeroes read split ...passed 00:21:08.773 Test: blockdev write zeroes read split partial ...passed 00:21:08.773 Test: blockdev reset ...[2024-05-15 09:14:21.134883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:08.773 [2024-05-15 09:14:21.135250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73370 (9): Bad file descriptor 00:21:08.773 [2024-05-15 09:14:21.152516] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:08.773 passed 00:21:08.773 Test: blockdev write read 8 blocks ...passed 00:21:08.773 Test: blockdev write read size > 128k ...passed 00:21:08.773 Test: blockdev write read invalid size ...passed 00:21:08.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:08.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:08.773 Test: blockdev write read max offset ...passed 00:21:08.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:08.773 Test: blockdev writev readv 8 blocks ...passed 00:21:08.773 Test: blockdev writev readv 30 x 1block ...passed 00:21:08.773 Test: blockdev writev readv block ...passed 00:21:08.773 Test: blockdev writev readv size > 128k ...passed 00:21:08.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:08.773 Test: blockdev comparev and writev ...[2024-05-15 09:14:21.161691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.773 [2024-05-15 09:14:21.161888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.162065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.773 [2024-05-15 09:14:21.162168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.162569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.773 [2024-05-15 09:14:21.162709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.162843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.773 [2024-05-15 09:14:21.162920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.163245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.773 [2024-05-15 09:14:21.163373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.163495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.773 [2024-05-15 09:14:21.163576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.163982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.773 [2024-05-15 09:14:21.164111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.164230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.773 [2024-05-15 09:14:21.164334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:08.773 passed 00:21:08.773 Test: blockdev nvme passthru rw ...passed 00:21:08.773 Test: blockdev nvme passthru vendor specific ...[2024-05-15 09:14:21.165261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.773 [2024-05-15 09:14:21.165397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.165649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.773 [2024-05-15 09:14:21.165773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.165991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.773 [2024-05-15 09:14:21.166141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:08.773 [2024-05-15 09:14:21.166362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.773 [2024-05-15 09:14:21.166486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:08.773 passed 00:21:08.773 Test: blockdev nvme admin passthru ...passed 00:21:08.773 Test: blockdev copy ...passed 00:21:08.773 00:21:08.773 Run Summary: Type Total Ran Passed Failed Inactive 00:21:08.773 suites 1 1 n/a 0 0 00:21:08.773 tests 23 23 23 0 0 00:21:08.773 asserts 152 152 152 0 
n/a 00:21:08.773 00:21:08.773 Elapsed time = 0.190 seconds 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.341 rmmod nvme_tcp 00:21:09.341 rmmod nvme_fabrics 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 71706 ']' 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 71706 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 71706 ']' 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 71706 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 71706 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 71706' 00:21:09.341 killing process with pid 71706 00:21:09.341 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 71706 00:21:09.341 [2024-05-15 09:14:21.708040] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 09:14:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 71706 00:21:09.341 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:09.919 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:09.919 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:09.919 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:09.919 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
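The bdevio suite that just finished (23/23 tests passed) was wired up entirely over JSON-RPC before any I/O ran: rpc_cmd created the TCP transport, a 64 MiB / 512 B malloc bdev, subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420; bdevio itself then ran with --no-huge -s 1024 against a generated bdev_nvme_attach_controller config fed in on /dev/fd/62. A rough standalone equivalent of that sequence (rpc.py and the default /var/tmp/spdk.sock are assumed here; the test drives the same calls through its rpc_cmd and gen_nvmf_target_json helpers):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side, as traced above.
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevio consumes a JSON config whose entry is the
  # bdev_nvme_attach_controller call printed by gen_nvmf_target_json in the trace
  # (the /dev/fd/62 seen above is this process substitution).
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
      --json <(gen_nvmf_target_json) --no-huge -s 1024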
00:21:09.919 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:09.920 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.920 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.920 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.920 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:09.920 ************************************ 00:21:09.920 END TEST nvmf_bdevio_no_huge 00:21:09.920 ************************************ 00:21:09.920 00:21:09.920 real 0m3.350s 00:21:09.920 user 0m10.710s 00:21:09.920 sys 0m1.523s 00:21:09.920 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:09.920 09:14:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.920 09:14:22 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:09.920 09:14:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:09.920 09:14:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:09.920 09:14:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:09.920 ************************************ 00:21:09.920 START TEST nvmf_tls 00:21:09.920 ************************************ 00:21:09.920 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:09.920 * Looking for test storage... 00:21:09.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:09.920 09:14:22 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.181 09:14:22 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.181 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:10.182 Cannot find device "nvmf_tgt_br" 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.182 Cannot find device "nvmf_tgt_br2" 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:10.182 Cannot find device "nvmf_tgt_br" 
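The "Cannot find device" and "Cannot open network namespace" messages in this second nvmftestinit are expected, not failures: before building the TLS test network, nvmf_veth_init tears down whatever a previous run may have left behind, and every delete is allowed to fail. The bare "# true" lines in the xtrace are the fallback branch of that pattern. Sketched below with the command list taken from the trace; the "|| true" is inferred from those "# true" lines rather than quoted from common.sh.

  # Idempotent teardown of possibly-absent devices before (re)creating them.
  ip link set nvmf_init_br nomaster || true
  ip link set nvmf_tgt_br  nomaster || true
  ip link set nvmf_tgt_br2 nomaster || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true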
00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:10.182 Cannot find device "nvmf_tgt_br2" 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:10.182 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:10.440 09:14:22 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:10.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:21:10.440 00:21:10.440 --- 10.0.0.2 ping statistics --- 00:21:10.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.440 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:10.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:10.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:21:10.440 00:21:10.440 --- 10.0.0.3 ping statistics --- 00:21:10.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.440 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:10.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:10.440 00:21:10.440 --- 10.0.0.1 ping statistics --- 00:21:10.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.440 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=71922 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 71922 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 71922 ']' 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:10.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
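nvmfappstart here differs from the bdevio run in one important way: the target is launched with --wait-for-rpc, so its framework stays uninitialized until the test has switched the default socket implementation to ssl and set the TLS version (the sock_set_default_impl / sock_impl_set_options calls that follow), after which framework_start_init completes startup. waitforlisten simply polls the RPC socket until the application answers. A simplified sketch of that wait (the real helper in autotest_common.sh also checks that the pid is still alive; the 0.5 s polling interval is an assumption, only max_retries=100 appears in the trace):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do                      # max_retries=100
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break                                        # RPC server is up
      fi
      sleep 0.5
  done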
00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:10.440 09:14:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.440 [2024-05-15 09:14:22.852898] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:21:10.440 [2024-05-15 09:14:22.853168] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.697 [2024-05-15 09:14:22.991531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.698 [2024-05-15 09:14:23.086344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.698 [2024-05-15 09:14:23.086572] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.698 [2024-05-15 09:14:23.086666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.698 [2024-05-15 09:14:23.086712] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.698 [2024-05-15 09:14:23.086739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.698 [2024-05-15 09:14:23.086786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.631 09:14:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:11.631 09:14:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:11.631 09:14:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.631 09:14:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:11.631 09:14:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.631 09:14:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.631 09:14:23 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:11.631 09:14:23 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:11.888 true 00:21:11.888 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:11.888 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:11.888 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:11.888 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:11.888 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:12.145 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:12.145 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:12.427 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:12.427 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:12.427 09:14:24 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:12.685 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:12.685 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:21:12.943 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:12.943 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:12.943 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:12.943 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.201 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:13.201 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:13.201 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:13.458 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.458 09:14:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:13.716 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:13.716 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:13.716 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:13.975 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.975 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:14.234 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.zwj9DiJZQl 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 
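format_interchange_psk above turns a configured key plus a hash identifier into the NVMe TLS PSK interchange form, "NVMeTLSkey-1:01:<base64 blob>:". Judging by the strings it produced, the blob is the base64 of the configured key with a short check value appended; the hedged sketch below assumes that check value is a CRC-32 appended little-endian, which is inferred from the interchange-format convention rather than read out of this trace (the test's own helper is the python snippet invoked from nvmf/common.sh).

  # Hedged sketch: produce an interchange-format PSK string of the same shape as the
  # NVMeTLSkey-1:01:...: values generated above. Assumes blob = base64(key || CRC-32(key)).
  key='00112233445566778899aabbccddeeff'
  python3 - "$key" << 'EOF'
  import base64, sys, zlib
  key = sys.argv[1].encode()
  crc = zlib.crc32(key).to_bytes(4, "little")   # byte order: assumption
  print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode(), end="")
  EOF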
00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.HHYghtMkqL 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.zwj9DiJZQl 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.HHYghtMkqL 00:21:14.490 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:14.747 09:14:26 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:15.005 09:14:27 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.zwj9DiJZQl 00:21:15.005 09:14:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zwj9DiJZQl 00:21:15.005 09:14:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.263 [2024-05-15 09:14:27.660047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.263 09:14:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:15.541 09:14:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:15.801 [2024-05-15 09:14:28.096088] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:15.801 [2024-05-15 09:14:28.096427] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.801 [2024-05-15 09:14:28.096708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.801 09:14:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.060 malloc0 00:21:16.060 09:14:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:16.319 09:14:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zwj9DiJZQl 00:21:16.319 [2024-05-15 09:14:28.725378] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:16.319 09:14:28 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zwj9DiJZQl 00:21:28.516 Initializing NVMe Controllers 00:21:28.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:28.516 Initialization complete. Launching workers. 
00:21:28.516 ======================================================== 00:21:28.516 Latency(us) 00:21:28.516 Device Information : IOPS MiB/s Average min max 00:21:28.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10900.16 42.58 5872.47 949.26 9128.54 00:21:28.516 ======================================================== 00:21:28.516 Total : 10900.16 42.58 5872.47 949.26 9128.54 00:21:28.516 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zwj9DiJZQl 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zwj9DiJZQl' 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72153 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72153 /var/tmp/bdevperf.sock 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72153 ']' 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:28.516 09:14:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.516 [2024-05-15 09:14:39.000119] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
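The IOPS/latency table just above is the output of the first data-path check: spdk_nvme_perf run from inside the target namespace with the ssl socket implementation and the PSK file that was registered for host1. Condensed from the trace, with all values exactly as logged:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.zwj9DiJZQl

run_bdevperf, which starts next in the trace, exercises the same TLS connection again through the bdevperf harness (TLSTESTn1) instead of the perf tool.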
00:21:28.516 [2024-05-15 09:14:39.000653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72153 ] 00:21:28.516 [2024-05-15 09:14:39.138764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.516 [2024-05-15 09:14:39.259791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.517 09:14:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:28.517 09:14:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:28.517 09:14:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zwj9DiJZQl 00:21:28.517 [2024-05-15 09:14:40.225219] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:28.517 [2024-05-15 09:14:40.225587] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:28.517 TLSTESTn1 00:21:28.517 09:14:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:28.517 Running I/O for 10 seconds... 00:21:38.514 00:21:38.514 Latency(us) 00:21:38.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.514 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:38.514 Verification LBA range: start 0x0 length 0x2000 00:21:38.514 TLSTESTn1 : 10.01 5425.55 21.19 0.00 0.00 23551.51 4930.80 20597.03 00:21:38.514 =================================================================================================================== 00:21:38.514 Total : 5425.55 21.19 0.00 0.00 23551.51 4930.80 20597.03 00:21:38.514 0 00:21:38.514 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.514 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72153 00:21:38.514 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72153 ']' 00:21:38.514 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72153 00:21:38.514 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:38.514 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:38.514 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72153 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72153' 00:21:38.515 killing process with pid 72153 00:21:38.515 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.515 00:21:38.515 Latency(us) 00:21:38.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.515 =================================================================================================================== 00:21:38.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.515 
09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72153 00:21:38.515 [2024-05-15 09:14:50.499143] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72153 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HHYghtMkqL 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HHYghtMkqL 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HHYghtMkqL 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HHYghtMkqL' 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72281 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72281 /var/tmp/bdevperf.sock 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72281 ']' 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:38.515 09:14:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.515 [2024-05-15 09:14:50.775234] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:21:38.515 [2024-05-15 09:14:50.775535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72281 ] 00:21:38.515 [2024-05-15 09:14:50.911175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.773 [2024-05-15 09:14:51.015532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.709 09:14:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:39.709 09:14:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:39.709 09:14:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHYghtMkqL 00:21:39.709 [2024-05-15 09:14:51.997529] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.709 [2024-05-15 09:14:51.997929] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:39.709 [2024-05-15 09:14:52.008730] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:39.709 [2024-05-15 09:14:52.009601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbce9f0 (107): Transport endpoint is not connected 00:21:39.709 [2024-05-15 09:14:52.010591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbce9f0 (9): Bad file descriptor 00:21:39.709 [2024-05-15 09:14:52.011587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:39.709 [2024-05-15 09:14:52.011733] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:39.709 [2024-05-15 09:14:52.011823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
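
The attach above is meant to fail: /tmp/tmp.HHYghtMkqL is a key the target side was not set up to accept (this is the wrong-key case), so the server drops the connection during the TLS handshake and the initiator only sees errno 107 ("Transport endpoint is not connected") before the JSON-RPC error dumped below. In plain shell, the assertion the NOT wrapper makes here is roughly the following; it paraphrases the autotest_common.sh helper and is not the suite's actual code.

SPDK=/home/vagrant/spdk_repo/spdk
# Expect the attach to fail because the target has no matching PSK for this key file.
if "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.HHYghtMkqL; then
    echo "unexpected success: this PSK was never registered on the target" >&2
    exit 1
fi
echo "attach failed as expected (wrong PSK)"
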
00:21:39.709 request: 00:21:39.709 { 00:21:39.709 "name": "TLSTEST", 00:21:39.709 "trtype": "tcp", 00:21:39.709 "traddr": "10.0.0.2", 00:21:39.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.709 "adrfam": "ipv4", 00:21:39.709 "trsvcid": "4420", 00:21:39.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.709 "psk": "/tmp/tmp.HHYghtMkqL", 00:21:39.709 "method": "bdev_nvme_attach_controller", 00:21:39.709 "req_id": 1 00:21:39.709 } 00:21:39.709 Got JSON-RPC error response 00:21:39.709 response: 00:21:39.709 { 00:21:39.709 "code": -32602, 00:21:39.709 "message": "Invalid parameters" 00:21:39.709 } 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72281 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72281 ']' 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72281 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72281 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72281' 00:21:39.709 killing process with pid 72281 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72281 00:21:39.709 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.709 00:21:39.709 Latency(us) 00:21:39.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.709 =================================================================================================================== 00:21:39.709 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.709 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72281 00:21:39.709 [2024-05-15 09:14:52.064387] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zwj9DiJZQl 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zwj9DiJZQl 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type 
-t "$arg")" in 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zwj9DiJZQl 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zwj9DiJZQl' 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72314 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72314 /var/tmp/bdevperf.sock 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72314 ']' 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:39.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:39.968 09:14:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.968 [2024-05-15 09:14:52.367255] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:21:39.968 [2024-05-15 09:14:52.367690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72314 ] 00:21:40.228 [2024-05-15 09:14:52.515364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.228 [2024-05-15 09:14:52.619833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.zwj9DiJZQl 00:21:41.165 [2024-05-15 09:14:53.532629] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.165 [2024-05-15 09:14:53.532978] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:41.165 [2024-05-15 09:14:53.538753] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:41.165 [2024-05-15 09:14:53.538988] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:41.165 [2024-05-15 09:14:53.539141] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:41.165 [2024-05-15 09:14:53.539333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0c9f0 (107): Transport endpoint is not connected 00:21:41.165 [2024-05-15 09:14:53.540326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0c9f0 (9): Bad file descriptor 00:21:41.165 [2024-05-15 09:14:53.541321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.165 [2024-05-15 09:14:53.541460] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:41.165 [2024-05-15 09:14:53.541537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
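
This case fails for a different reason than the previous one: the key file is the one the target knows, but the controller is attached as host2, and host2 has no PSK registered on the subsystem; that is exactly what the "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" error above says, so the connection is closed before the controller can initialize. For illustration, the add_host call the test deliberately omits would look like the rpc.py invocation below (the same form appears later in this log); it is a sketch, not part of the test.

SPDK=/home/vagrant/spdk_repo/spdk
# What this negative test leaves out on purpose: authorizing host2 on the target.
# If host2 were meant to connect with this key, the suite would register it, e.g.:
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.zwj9DiJZQl
# Without such an entry, the target-side PSK lookup for
# "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" fails, as logged above.
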
00:21:41.165 request: 00:21:41.165 { 00:21:41.165 "name": "TLSTEST", 00:21:41.165 "trtype": "tcp", 00:21:41.165 "traddr": "10.0.0.2", 00:21:41.165 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:41.165 "adrfam": "ipv4", 00:21:41.165 "trsvcid": "4420", 00:21:41.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.165 "psk": "/tmp/tmp.zwj9DiJZQl", 00:21:41.165 "method": "bdev_nvme_attach_controller", 00:21:41.165 "req_id": 1 00:21:41.165 } 00:21:41.165 Got JSON-RPC error response 00:21:41.165 response: 00:21:41.165 { 00:21:41.165 "code": -32602, 00:21:41.165 "message": "Invalid parameters" 00:21:41.165 } 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72314 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72314 ']' 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72314 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72314 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72314' 00:21:41.165 killing process with pid 72314 00:21:41.165 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72314 00:21:41.165 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.165 00:21:41.165 Latency(us) 00:21:41.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.165 =================================================================================================================== 00:21:41.165 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.165 [2024-05-15 09:14:53.597869] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72314 00:21:41.165 scheduled for removal in v24.09 hit 1 times 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zwj9DiJZQl 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zwj9DiJZQl 00:21:41.423 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type 
-t "$arg")" in 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zwj9DiJZQl 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zwj9DiJZQl' 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72336 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72336 /var/tmp/bdevperf.sock 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72336 ']' 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:41.424 09:14:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.682 [2024-05-15 09:14:53.874026] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:21:41.682 [2024-05-15 09:14:53.874303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72336 ] 00:21:41.682 [2024-05-15 09:14:54.009954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.682 [2024-05-15 09:14:54.110875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.630 09:14:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:42.630 09:14:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:42.630 09:14:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zwj9DiJZQl 00:21:42.630 [2024-05-15 09:14:55.056401] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.630 [2024-05-15 09:14:55.056780] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:42.630 [2024-05-15 09:14:55.061520] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:42.630 [2024-05-15 09:14:55.061743] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:42.630 [2024-05-15 09:14:55.061915] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:42.630 [2024-05-15 09:14:55.062249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc359f0 (107): Transport endpoint is not connected 00:21:42.630 [2024-05-15 09:14:55.063238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc359f0 (9): Bad file descriptor 00:21:42.631 [2024-05-15 09:14:55.064235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:42.631 [2024-05-15 09:14:55.064377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:42.631 [2024-05-15 09:14:55.064458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:42.631 request: 00:21:42.631 { 00:21:42.631 "name": "TLSTEST", 00:21:42.631 "trtype": "tcp", 00:21:42.631 "traddr": "10.0.0.2", 00:21:42.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.631 "adrfam": "ipv4", 00:21:42.631 "trsvcid": "4420", 00:21:42.631 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.631 "psk": "/tmp/tmp.zwj9DiJZQl", 00:21:42.631 "method": "bdev_nvme_attach_controller", 00:21:42.631 "req_id": 1 00:21:42.631 } 00:21:42.631 Got JSON-RPC error response 00:21:42.631 response: 00:21:42.631 { 00:21:42.631 "code": -32602, 00:21:42.631 "message": "Invalid parameters" 00:21:42.631 } 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72336 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72336 ']' 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72336 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72336 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72336' 00:21:42.889 killing process with pid 72336 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72336 00:21:42.889 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.889 00:21:42.889 Latency(us) 00:21:42.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.889 =================================================================================================================== 00:21:42.889 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72336 00:21:42.889 [2024-05-15 09:14:55.114989] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:42.889 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:43.147 
09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72364 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72364 /var/tmp/bdevperf.sock 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72364 ']' 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.147 09:14:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.147 [2024-05-15 09:14:55.395276] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:21:43.147 [2024-05-15 09:14:55.395910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72364 ] 00:21:43.147 [2024-05-15 09:14:55.541668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.405 [2024-05-15 09:14:55.644849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.971 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:43.971 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:43.971 09:14:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:44.230 [2024-05-15 09:14:56.569304] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:44.230 [2024-05-15 09:14:56.570797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7bce0 (9): Bad file descriptor 00:21:44.230 [2024-05-15 09:14:56.571794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:44.230 [2024-05-15 09:14:56.571928] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:44.230 [2024-05-15 09:14:56.572009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:44.230 request: 00:21:44.230 { 00:21:44.230 "name": "TLSTEST", 00:21:44.230 "trtype": "tcp", 00:21:44.230 "traddr": "10.0.0.2", 00:21:44.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.230 "adrfam": "ipv4", 00:21:44.230 "trsvcid": "4420", 00:21:44.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.230 "method": "bdev_nvme_attach_controller", 00:21:44.230 "req_id": 1 00:21:44.230 } 00:21:44.230 Got JSON-RPC error response 00:21:44.230 response: 00:21:44.230 { 00:21:44.230 "code": -32602, 00:21:44.230 "message": "Invalid parameters" 00:21:44.230 } 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72364 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72364 ']' 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72364 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72364 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72364' 00:21:44.230 killing process with pid 72364 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72364 00:21:44.230 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.230 00:21:44.230 Latency(us) 00:21:44.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.230 
=================================================================================================================== 00:21:44.230 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.230 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72364 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 71922 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 71922 ']' 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 71922 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 71922 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 71922' 00:21:44.488 killing process with pid 71922 00:21:44.488 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 71922 00:21:44.488 [2024-05-15 09:14:56.896901] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:44.488 [2024-05-15 09:14:56.897106] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 09:14:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 71922 00:21:44.488 removal in v24.09 hit 1 times 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.6Yv0ygeGUW 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.6Yv0ygeGUW 
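
The long key generated just above is in the TLS PSK interchange format: the "NVMeTLSkey-1" prefix, a two-digit hash indicator (02 here, matching the "2" passed to format_interchange_psk), and a base64 blob that encodes the configured key bytes with what appears to be a 4-byte CRC32 appended. The sketch below reproduces that framing with an embedded python3 heredoc, mirroring the "python -" step visible in nvmf/common.sh; the little-endian CRC and the argument handling are assumptions made for illustration, not the suite's exact helper.

key="00112233445566778899aabbccddeeff0011223344556677"
digest=2   # hash indicator; the value format_interchange_psk was called with above

# Hedged sketch of the interchange framing: prefix : hash-indicator : base64(key + CRC32) :
python3 - "$key" "$digest" <<'EOF'
import base64
import sys
import zlib

key = sys.argv[1].encode()                   # key text is used as-is
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended (endianness assumed)
b64 = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{b64}:")
EOF
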
00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72401 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72401 00:21:44.745 09:14:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72401 ']' 00:21:45.001 09:14:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.001 09:14:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:45.001 09:14:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.001 09:14:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:45.001 09:14:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.001 09:14:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.001 [2024-05-15 09:14:57.275268] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:21:45.001 [2024-05-15 09:14:57.275901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.002 [2024-05-15 09:14:57.428517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.258 [2024-05-15 09:14:57.561664] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.258 [2024-05-15 09:14:57.561984] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.258 [2024-05-15 09:14:57.562166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.258 [2024-05-15 09:14:57.562319] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.258 [2024-05-15 09:14:57.562449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.258 [2024-05-15 09:14:57.562562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.6Yv0ygeGUW 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6Yv0ygeGUW 00:21:46.189 09:14:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:46.446 [2024-05-15 09:14:58.706016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.447 09:14:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:46.705 09:14:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:46.965 [2024-05-15 09:14:59.350170] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:46.965 [2024-05-15 09:14:59.350626] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:46.965 [2024-05-15 09:14:59.350945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.965 09:14:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:47.530 malloc0 00:21:47.530 09:14:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW 00:21:47.818 [2024-05-15 09:15:00.226775] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:47.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
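
For orientation, the target-side configuration that this bdevperf run connects to is the set of RPCs traced in the setup_nvmf_tgt steps above; collected in one place it is roughly the following, with only the comments added (the -k flag is what requests the TLS listener, per the "TLS support is considered experimental" notice).

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
KEY=/tmp/tmp.6Yv0ygeGUW                                # 0600-permission PSK file from above

"$RPC" nvmf_create_transport -t tcp -o                 # TCP transport (flags as traced above)
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                      # -k: TLS listener (experimental notice above)
"$RPC" bdev_malloc_create 32 4096 -b malloc0           # 32 MiB backing bdev, 4096-byte blocks
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$KEY"             # authorize host1 with this PSK
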
00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Yv0ygeGUW 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6Yv0ygeGUW' 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72461 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72461 /var/tmp/bdevperf.sock 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72461 ']' 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:47.818 09:15:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.077 [2024-05-15 09:15:00.296601] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:21:48.077 [2024-05-15 09:15:00.296906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72461 ] 00:21:48.077 [2024-05-15 09:15:00.435637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.336 [2024-05-15 09:15:00.554361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.902 09:15:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:48.902 09:15:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:48.902 09:15:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW 00:21:49.161 [2024-05-15 09:15:01.390627] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.161 [2024-05-15 09:15:01.390958] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:49.161 TLSTESTn1 00:21:49.161 09:15:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:49.161 Running I/O for 10 seconds... 
00:22:01.364 00:22:01.364 Latency(us) 00:22:01.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.364 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.364 Verification LBA range: start 0x0 length 0x2000 00:22:01.364 TLSTESTn1 : 10.02 5470.32 21.37 0.00 0.00 23353.95 5710.99 17850.76 00:22:01.364 =================================================================================================================== 00:22:01.364 Total : 5470.32 21.37 0.00 0.00 23353.95 5710.99 17850.76 00:22:01.364 0 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72461 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72461 ']' 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72461 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72461 00:22:01.364 killing process with pid 72461 00:22:01.364 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.364 00:22:01.364 Latency(us) 00:22:01.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.364 =================================================================================================================== 00:22:01.364 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72461' 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72461 00:22:01.364 [2024-05-15 09:15:11.638676] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72461 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.6Yv0ygeGUW 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Yv0ygeGUW 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Yv0ygeGUW 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Yv0ygeGUW 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:01.364 
09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6Yv0ygeGUW' 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72596 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72596 /var/tmp/bdevperf.sock 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72596 ']' 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:01.364 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.365 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:01.365 09:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.365 [2024-05-15 09:15:11.912993] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:01.365 [2024-05-15 09:15:11.913756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72596 ] 00:22:01.365 [2024-05-15 09:15:12.051736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.365 [2024-05-15 09:15:12.153019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.365 09:15:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:01.365 09:15:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:01.365 09:15:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW 00:22:01.365 [2024-05-15 09:15:13.122560] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.365 [2024-05-15 09:15:13.122916] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:01.365 [2024-05-15 09:15:13.123007] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.6Yv0ygeGUW 00:22:01.365 request: 00:22:01.365 { 00:22:01.365 "name": "TLSTEST", 00:22:01.365 "trtype": "tcp", 00:22:01.365 "traddr": "10.0.0.2", 00:22:01.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.365 "adrfam": "ipv4", 00:22:01.365 "trsvcid": "4420", 00:22:01.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.365 "psk": "/tmp/tmp.6Yv0ygeGUW", 00:22:01.365 "method": "bdev_nvme_attach_controller", 00:22:01.365 "req_id": 1 
00:22:01.365 } 00:22:01.365 Got JSON-RPC error response 00:22:01.365 response: 00:22:01.365 { 00:22:01.365 "code": -1, 00:22:01.365 "message": "Operation not permitted" 00:22:01.365 } 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72596 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72596 ']' 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72596 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72596 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72596' 00:22:01.365 killing process with pid 72596 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72596 00:22:01.365 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.365 00:22:01.365 Latency(us) 00:22:01.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.365 =================================================================================================================== 00:22:01.365 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72596 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 72401 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72401 ']' 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72401 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72401 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72401' 00:22:01.365 killing process with pid 72401 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72401 00:22:01.365 [2024-05-15 09:15:13.441227] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72401 00:22:01.365 [2024-05-15 09:15:13.441465] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72629 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72629 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72629 ']' 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:01.365 09:15:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.365 [2024-05-15 09:15:13.726674] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:01.365 [2024-05-15 09:15:13.726959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.623 [2024-05-15 09:15:13.863519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.623 [2024-05-15 09:15:13.967079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.623 [2024-05-15 09:15:13.967278] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.623 [2024-05-15 09:15:13.967393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.623 [2024-05-15 09:15:13.967453] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.623 [2024-05-15 09:15:13.967487] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:01.623 [2024-05-15 09:15:13.967560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.254 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:02.254 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:02.254 09:15:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.254 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:02.254 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.6Yv0ygeGUW 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6Yv0ygeGUW 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.6Yv0ygeGUW 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6Yv0ygeGUW 00:22:02.512 09:15:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:02.512 [2024-05-15 09:15:14.950915] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.771 09:15:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:02.771 09:15:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:03.029 [2024-05-15 09:15:15.374947] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:03.029 [2024-05-15 09:15:15.375290] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.029 [2024-05-15 09:15:15.375590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.029 09:15:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:03.287 malloc0 00:22:03.287 09:15:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:03.545 09:15:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW 00:22:03.804 [2024-05-15 09:15:16.148764] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:03.804 [2024-05-15 09:15:16.148812] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 
00:22:03.804 [2024-05-15 09:15:16.148845] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:03.804 request: 00:22:03.804 { 00:22:03.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.804 "host": "nqn.2016-06.io.spdk:host1", 00:22:03.804 "psk": "/tmp/tmp.6Yv0ygeGUW", 00:22:03.804 "method": "nvmf_subsystem_add_host", 00:22:03.804 "req_id": 1 00:22:03.804 } 00:22:03.804 Got JSON-RPC error response 00:22:03.804 response: 00:22:03.804 { 00:22:03.804 "code": -32603, 00:22:03.804 "message": "Internal error" 00:22:03.804 } 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 72629 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72629 ']' 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72629 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72629 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:03.804 killing process with pid 72629 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72629' 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72629 00:22:03.804 [2024-05-15 09:15:16.196011] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:03.804 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72629 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.6Yv0ygeGUW 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72691 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72691 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72691 ']' 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:04.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
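[editor's sketch] The "Incorrect permissions for PSK file" error and the resulting -32603 JSON-RPC response above are the expected negative case: the PSK file must not be world/group readable before nvmf_subsystem_add_host will accept it. The trace fixes this with chmod 0600 and retries; a minimal sketch of the passing form, using this run's temporary key path, is:
  chmod 0600 /tmp/tmp.6Yv0ygeGUW
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW
With 0600 permissions the same RPC succeeds later in the trace, leaving only the "PSK path" deprecation warning.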
00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:04.064 09:15:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.064 [2024-05-15 09:15:16.482530] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:04.064 [2024-05-15 09:15:16.482625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.323 [2024-05-15 09:15:16.621188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.323 [2024-05-15 09:15:16.725079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.323 [2024-05-15 09:15:16.725150] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.323 [2024-05-15 09:15:16.725161] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.323 [2024-05-15 09:15:16.725171] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.323 [2024-05-15 09:15:16.725179] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.323 [2024-05-15 09:15:16.725206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.890 09:15:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:04.890 09:15:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:04.890 09:15:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.890 09:15:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:04.890 09:15:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.147 09:15:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.147 09:15:17 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.6Yv0ygeGUW 00:22:05.147 09:15:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6Yv0ygeGUW 00:22:05.148 09:15:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.405 [2024-05-15 09:15:17.635068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.406 09:15:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:05.406 09:15:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:05.664 [2024-05-15 09:15:18.075096] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:05.664 [2024-05-15 09:15:18.075228] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.664 [2024-05-15 09:15:18.075422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.664 09:15:18 
nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.922 malloc0 00:22:05.922 09:15:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:06.180 09:15:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW 00:22:06.439 [2024-05-15 09:15:18.796531] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=72746 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 72746 /var/tmp/bdevperf.sock 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72746 ']' 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:06.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:06.439 09:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.439 [2024-05-15 09:15:18.860962] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
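[editor's sketch] On the initiator side the trace launches bdevperf in RPC-server mode (-z) on /var/tmp/bdevperf.sock and then, as the next part of the trace shows, attaches a TLS-protected controller over that socket using the same PSK file. A hedged condensation of those two steps, with all flags taken from this run:
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # once the socket is up, create the TLSTEST controller against the listener on 10.0.0.2:4420
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW
The attach produces the TLSTESTn1 namespace bdev seen in the trace, along with the experimental-TLS and spdk_nvme_ctrlr_opts.psk deprecation notices.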
00:22:06.439 [2024-05-15 09:15:18.861087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72746 ] 00:22:06.697 [2024-05-15 09:15:18.999070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.697 [2024-05-15 09:15:19.099798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.632 09:15:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:07.632 09:15:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:07.632 09:15:19 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW 00:22:07.632 [2024-05-15 09:15:19.970333] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.632 [2024-05-15 09:15:19.970472] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:07.632 TLSTESTn1 00:22:07.632 09:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:08.199 09:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:08.199 "subsystems": [ 00:22:08.199 { 00:22:08.199 "subsystem": "keyring", 00:22:08.199 "config": [] 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "subsystem": "iobuf", 00:22:08.199 "config": [ 00:22:08.199 { 00:22:08.199 "method": "iobuf_set_options", 00:22:08.199 "params": { 00:22:08.199 "small_pool_count": 8192, 00:22:08.199 "large_pool_count": 1024, 00:22:08.199 "small_bufsize": 8192, 00:22:08.199 "large_bufsize": 135168 00:22:08.199 } 00:22:08.199 } 00:22:08.199 ] 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "subsystem": "sock", 00:22:08.199 "config": [ 00:22:08.199 { 00:22:08.199 "method": "sock_impl_set_options", 00:22:08.199 "params": { 00:22:08.199 "impl_name": "uring", 00:22:08.199 "recv_buf_size": 2097152, 00:22:08.199 "send_buf_size": 2097152, 00:22:08.199 "enable_recv_pipe": true, 00:22:08.199 "enable_quickack": false, 00:22:08.199 "enable_placement_id": 0, 00:22:08.199 "enable_zerocopy_send_server": false, 00:22:08.199 "enable_zerocopy_send_client": false, 00:22:08.199 "zerocopy_threshold": 0, 00:22:08.199 "tls_version": 0, 00:22:08.199 "enable_ktls": false 00:22:08.199 } 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "method": "sock_impl_set_options", 00:22:08.199 "params": { 00:22:08.199 "impl_name": "posix", 00:22:08.199 "recv_buf_size": 2097152, 00:22:08.199 "send_buf_size": 2097152, 00:22:08.199 "enable_recv_pipe": true, 00:22:08.199 "enable_quickack": false, 00:22:08.199 "enable_placement_id": 0, 00:22:08.199 "enable_zerocopy_send_server": true, 00:22:08.199 "enable_zerocopy_send_client": false, 00:22:08.199 "zerocopy_threshold": 0, 00:22:08.199 "tls_version": 0, 00:22:08.199 "enable_ktls": false 00:22:08.199 } 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "method": "sock_impl_set_options", 00:22:08.199 "params": { 00:22:08.199 "impl_name": "ssl", 00:22:08.199 "recv_buf_size": 4096, 00:22:08.199 "send_buf_size": 4096, 00:22:08.199 "enable_recv_pipe": true, 00:22:08.199 "enable_quickack": false, 00:22:08.199 "enable_placement_id": 0, 00:22:08.199 "enable_zerocopy_send_server": 
true, 00:22:08.199 "enable_zerocopy_send_client": false, 00:22:08.199 "zerocopy_threshold": 0, 00:22:08.199 "tls_version": 0, 00:22:08.199 "enable_ktls": false 00:22:08.199 } 00:22:08.199 } 00:22:08.199 ] 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "subsystem": "vmd", 00:22:08.199 "config": [] 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "subsystem": "accel", 00:22:08.199 "config": [ 00:22:08.199 { 00:22:08.199 "method": "accel_set_options", 00:22:08.199 "params": { 00:22:08.199 "small_cache_size": 128, 00:22:08.199 "large_cache_size": 16, 00:22:08.199 "task_count": 2048, 00:22:08.199 "sequence_count": 2048, 00:22:08.199 "buf_count": 2048 00:22:08.199 } 00:22:08.199 } 00:22:08.199 ] 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "subsystem": "bdev", 00:22:08.199 "config": [ 00:22:08.199 { 00:22:08.199 "method": "bdev_set_options", 00:22:08.199 "params": { 00:22:08.199 "bdev_io_pool_size": 65535, 00:22:08.199 "bdev_io_cache_size": 256, 00:22:08.199 "bdev_auto_examine": true, 00:22:08.199 "iobuf_small_cache_size": 128, 00:22:08.199 "iobuf_large_cache_size": 16 00:22:08.199 } 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "method": "bdev_raid_set_options", 00:22:08.199 "params": { 00:22:08.199 "process_window_size_kb": 1024 00:22:08.199 } 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "method": "bdev_iscsi_set_options", 00:22:08.199 "params": { 00:22:08.199 "timeout_sec": 30 00:22:08.199 } 00:22:08.199 }, 00:22:08.199 { 00:22:08.199 "method": "bdev_nvme_set_options", 00:22:08.199 "params": { 00:22:08.199 "action_on_timeout": "none", 00:22:08.199 "timeout_us": 0, 00:22:08.199 "timeout_admin_us": 0, 00:22:08.199 "keep_alive_timeout_ms": 10000, 00:22:08.199 "arbitration_burst": 0, 00:22:08.199 "low_priority_weight": 0, 00:22:08.199 "medium_priority_weight": 0, 00:22:08.199 "high_priority_weight": 0, 00:22:08.199 "nvme_adminq_poll_period_us": 10000, 00:22:08.199 "nvme_ioq_poll_period_us": 0, 00:22:08.199 "io_queue_requests": 0, 00:22:08.199 "delay_cmd_submit": true, 00:22:08.199 "transport_retry_count": 4, 00:22:08.199 "bdev_retry_count": 3, 00:22:08.199 "transport_ack_timeout": 0, 00:22:08.199 "ctrlr_loss_timeout_sec": 0, 00:22:08.199 "reconnect_delay_sec": 0, 00:22:08.199 "fast_io_fail_timeout_sec": 0, 00:22:08.199 "disable_auto_failback": false, 00:22:08.200 "generate_uuids": false, 00:22:08.200 "transport_tos": 0, 00:22:08.200 "nvme_error_stat": false, 00:22:08.200 "rdma_srq_size": 0, 00:22:08.200 "io_path_stat": false, 00:22:08.200 "allow_accel_sequence": false, 00:22:08.200 "rdma_max_cq_size": 0, 00:22:08.200 "rdma_cm_event_timeout_ms": 0, 00:22:08.200 "dhchap_digests": [ 00:22:08.200 "sha256", 00:22:08.200 "sha384", 00:22:08.200 "sha512" 00:22:08.200 ], 00:22:08.200 "dhchap_dhgroups": [ 00:22:08.200 "null", 00:22:08.200 "ffdhe2048", 00:22:08.200 "ffdhe3072", 00:22:08.200 "ffdhe4096", 00:22:08.200 "ffdhe6144", 00:22:08.200 "ffdhe8192" 00:22:08.200 ] 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "bdev_nvme_set_hotplug", 00:22:08.200 "params": { 00:22:08.200 "period_us": 100000, 00:22:08.200 "enable": false 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "bdev_malloc_create", 00:22:08.200 "params": { 00:22:08.200 "name": "malloc0", 00:22:08.200 "num_blocks": 8192, 00:22:08.200 "block_size": 4096, 00:22:08.200 "physical_block_size": 4096, 00:22:08.200 "uuid": "1265b21f-49f3-48f9-85a5-c5ca1289dd6d", 00:22:08.200 "optimal_io_boundary": 0 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "bdev_wait_for_examine" 00:22:08.200 } 00:22:08.200 ] 
00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "subsystem": "nbd", 00:22:08.200 "config": [] 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "subsystem": "scheduler", 00:22:08.200 "config": [ 00:22:08.200 { 00:22:08.200 "method": "framework_set_scheduler", 00:22:08.200 "params": { 00:22:08.200 "name": "static" 00:22:08.200 } 00:22:08.200 } 00:22:08.200 ] 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "subsystem": "nvmf", 00:22:08.200 "config": [ 00:22:08.200 { 00:22:08.200 "method": "nvmf_set_config", 00:22:08.200 "params": { 00:22:08.200 "discovery_filter": "match_any", 00:22:08.200 "admin_cmd_passthru": { 00:22:08.200 "identify_ctrlr": false 00:22:08.200 } 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "nvmf_set_max_subsystems", 00:22:08.200 "params": { 00:22:08.200 "max_subsystems": 1024 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "nvmf_set_crdt", 00:22:08.200 "params": { 00:22:08.200 "crdt1": 0, 00:22:08.200 "crdt2": 0, 00:22:08.200 "crdt3": 0 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "nvmf_create_transport", 00:22:08.200 "params": { 00:22:08.200 "trtype": "TCP", 00:22:08.200 "max_queue_depth": 128, 00:22:08.200 "max_io_qpairs_per_ctrlr": 127, 00:22:08.200 "in_capsule_data_size": 4096, 00:22:08.200 "max_io_size": 131072, 00:22:08.200 "io_unit_size": 131072, 00:22:08.200 "max_aq_depth": 128, 00:22:08.200 "num_shared_buffers": 511, 00:22:08.200 "buf_cache_size": 4294967295, 00:22:08.200 "dif_insert_or_strip": false, 00:22:08.200 "zcopy": false, 00:22:08.200 "c2h_success": false, 00:22:08.200 "sock_priority": 0, 00:22:08.200 "abort_timeout_sec": 1, 00:22:08.200 "ack_timeout": 0, 00:22:08.200 "data_wr_pool_size": 0 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "nvmf_create_subsystem", 00:22:08.200 "params": { 00:22:08.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.200 "allow_any_host": false, 00:22:08.200 "serial_number": "SPDK00000000000001", 00:22:08.200 "model_number": "SPDK bdev Controller", 00:22:08.200 "max_namespaces": 10, 00:22:08.200 "min_cntlid": 1, 00:22:08.200 "max_cntlid": 65519, 00:22:08.200 "ana_reporting": false 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "nvmf_subsystem_add_host", 00:22:08.200 "params": { 00:22:08.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.200 "host": "nqn.2016-06.io.spdk:host1", 00:22:08.200 "psk": "/tmp/tmp.6Yv0ygeGUW" 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "nvmf_subsystem_add_ns", 00:22:08.200 "params": { 00:22:08.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.200 "namespace": { 00:22:08.200 "nsid": 1, 00:22:08.200 "bdev_name": "malloc0", 00:22:08.200 "nguid": "1265B21F49F348F985A5C5CA1289DD6D", 00:22:08.200 "uuid": "1265b21f-49f3-48f9-85a5-c5ca1289dd6d", 00:22:08.200 "no_auto_visible": false 00:22:08.200 } 00:22:08.200 } 00:22:08.200 }, 00:22:08.200 { 00:22:08.200 "method": "nvmf_subsystem_add_listener", 00:22:08.200 "params": { 00:22:08.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.200 "listen_address": { 00:22:08.200 "trtype": "TCP", 00:22:08.200 "adrfam": "IPv4", 00:22:08.200 "traddr": "10.0.0.2", 00:22:08.200 "trsvcid": "4420" 00:22:08.200 }, 00:22:08.200 "secure_channel": true 00:22:08.200 } 00:22:08.200 } 00:22:08.200 ] 00:22:08.200 } 00:22:08.200 ] 00:22:08.200 }' 00:22:08.200 09:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:08.459 09:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 
00:22:08.459 "subsystems": [ 00:22:08.459 { 00:22:08.459 "subsystem": "keyring", 00:22:08.459 "config": [] 00:22:08.459 }, 00:22:08.459 { 00:22:08.459 "subsystem": "iobuf", 00:22:08.459 "config": [ 00:22:08.459 { 00:22:08.459 "method": "iobuf_set_options", 00:22:08.459 "params": { 00:22:08.459 "small_pool_count": 8192, 00:22:08.459 "large_pool_count": 1024, 00:22:08.459 "small_bufsize": 8192, 00:22:08.459 "large_bufsize": 135168 00:22:08.459 } 00:22:08.459 } 00:22:08.459 ] 00:22:08.459 }, 00:22:08.459 { 00:22:08.459 "subsystem": "sock", 00:22:08.459 "config": [ 00:22:08.459 { 00:22:08.459 "method": "sock_impl_set_options", 00:22:08.459 "params": { 00:22:08.459 "impl_name": "uring", 00:22:08.459 "recv_buf_size": 2097152, 00:22:08.459 "send_buf_size": 2097152, 00:22:08.459 "enable_recv_pipe": true, 00:22:08.459 "enable_quickack": false, 00:22:08.459 "enable_placement_id": 0, 00:22:08.459 "enable_zerocopy_send_server": false, 00:22:08.459 "enable_zerocopy_send_client": false, 00:22:08.459 "zerocopy_threshold": 0, 00:22:08.459 "tls_version": 0, 00:22:08.459 "enable_ktls": false 00:22:08.459 } 00:22:08.459 }, 00:22:08.460 { 00:22:08.460 "method": "sock_impl_set_options", 00:22:08.460 "params": { 00:22:08.460 "impl_name": "posix", 00:22:08.460 "recv_buf_size": 2097152, 00:22:08.460 "send_buf_size": 2097152, 00:22:08.460 "enable_recv_pipe": true, 00:22:08.460 "enable_quickack": false, 00:22:08.460 "enable_placement_id": 0, 00:22:08.460 "enable_zerocopy_send_server": true, 00:22:08.460 "enable_zerocopy_send_client": false, 00:22:08.460 "zerocopy_threshold": 0, 00:22:08.460 "tls_version": 0, 00:22:08.460 "enable_ktls": false 00:22:08.460 } 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "method": "sock_impl_set_options", 00:22:08.460 "params": { 00:22:08.460 "impl_name": "ssl", 00:22:08.460 "recv_buf_size": 4096, 00:22:08.460 "send_buf_size": 4096, 00:22:08.460 "enable_recv_pipe": true, 00:22:08.460 "enable_quickack": false, 00:22:08.460 "enable_placement_id": 0, 00:22:08.460 "enable_zerocopy_send_server": true, 00:22:08.460 "enable_zerocopy_send_client": false, 00:22:08.460 "zerocopy_threshold": 0, 00:22:08.460 "tls_version": 0, 00:22:08.460 "enable_ktls": false 00:22:08.460 } 00:22:08.460 } 00:22:08.460 ] 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "subsystem": "vmd", 00:22:08.460 "config": [] 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "subsystem": "accel", 00:22:08.460 "config": [ 00:22:08.460 { 00:22:08.460 "method": "accel_set_options", 00:22:08.460 "params": { 00:22:08.460 "small_cache_size": 128, 00:22:08.460 "large_cache_size": 16, 00:22:08.460 "task_count": 2048, 00:22:08.460 "sequence_count": 2048, 00:22:08.460 "buf_count": 2048 00:22:08.460 } 00:22:08.460 } 00:22:08.460 ] 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "subsystem": "bdev", 00:22:08.460 "config": [ 00:22:08.460 { 00:22:08.460 "method": "bdev_set_options", 00:22:08.460 "params": { 00:22:08.460 "bdev_io_pool_size": 65535, 00:22:08.460 "bdev_io_cache_size": 256, 00:22:08.460 "bdev_auto_examine": true, 00:22:08.460 "iobuf_small_cache_size": 128, 00:22:08.460 "iobuf_large_cache_size": 16 00:22:08.460 } 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "method": "bdev_raid_set_options", 00:22:08.460 "params": { 00:22:08.460 "process_window_size_kb": 1024 00:22:08.460 } 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "method": "bdev_iscsi_set_options", 00:22:08.460 "params": { 00:22:08.460 "timeout_sec": 30 00:22:08.460 } 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "method": "bdev_nvme_set_options", 00:22:08.460 "params": { 00:22:08.460 
"action_on_timeout": "none", 00:22:08.460 "timeout_us": 0, 00:22:08.460 "timeout_admin_us": 0, 00:22:08.460 "keep_alive_timeout_ms": 10000, 00:22:08.460 "arbitration_burst": 0, 00:22:08.460 "low_priority_weight": 0, 00:22:08.460 "medium_priority_weight": 0, 00:22:08.460 "high_priority_weight": 0, 00:22:08.460 "nvme_adminq_poll_period_us": 10000, 00:22:08.460 "nvme_ioq_poll_period_us": 0, 00:22:08.460 "io_queue_requests": 512, 00:22:08.460 "delay_cmd_submit": true, 00:22:08.460 "transport_retry_count": 4, 00:22:08.460 "bdev_retry_count": 3, 00:22:08.460 "transport_ack_timeout": 0, 00:22:08.460 "ctrlr_loss_timeout_sec": 0, 00:22:08.460 "reconnect_delay_sec": 0, 00:22:08.460 "fast_io_fail_timeout_sec": 0, 00:22:08.460 "disable_auto_failback": false, 00:22:08.460 "generate_uuids": false, 00:22:08.460 "transport_tos": 0, 00:22:08.460 "nvme_error_stat": false, 00:22:08.460 "rdma_srq_size": 0, 00:22:08.460 "io_path_stat": false, 00:22:08.460 "allow_accel_sequence": false, 00:22:08.460 "rdma_max_cq_size": 0, 00:22:08.460 "rdma_cm_event_timeout_ms": 0, 00:22:08.460 "dhchap_digests": [ 00:22:08.460 "sha256", 00:22:08.460 "sha384", 00:22:08.460 "sha512" 00:22:08.460 ], 00:22:08.460 "dhchap_dhgroups": [ 00:22:08.460 "null", 00:22:08.460 "ffdhe2048", 00:22:08.460 "ffdhe3072", 00:22:08.460 "ffdhe4096", 00:22:08.460 "ffdhe6144", 00:22:08.460 "ffdhe8192" 00:22:08.460 ] 00:22:08.460 } 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "method": "bdev_nvme_attach_controller", 00:22:08.460 "params": { 00:22:08.460 "name": "TLSTEST", 00:22:08.460 "trtype": "TCP", 00:22:08.460 "adrfam": "IPv4", 00:22:08.460 "traddr": "10.0.0.2", 00:22:08.460 "trsvcid": "4420", 00:22:08.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.460 "prchk_reftag": false, 00:22:08.460 "prchk_guard": false, 00:22:08.460 "ctrlr_loss_timeout_sec": 0, 00:22:08.460 "reconnect_delay_sec": 0, 00:22:08.460 "fast_io_fail_timeout_sec": 0, 00:22:08.460 "psk": "/tmp/tmp.6Yv0ygeGUW", 00:22:08.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.460 "hdgst": false, 00:22:08.460 "ddgst": false 00:22:08.460 } 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "method": "bdev_nvme_set_hotplug", 00:22:08.460 "params": { 00:22:08.460 "period_us": 100000, 00:22:08.460 "enable": false 00:22:08.460 } 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "method": "bdev_wait_for_examine" 00:22:08.460 } 00:22:08.460 ] 00:22:08.460 }, 00:22:08.460 { 00:22:08.460 "subsystem": "nbd", 00:22:08.460 "config": [] 00:22:08.460 } 00:22:08.460 ] 00:22:08.460 }' 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 72746 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72746 ']' 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72746 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72746 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:08.460 killing process with pid 72746 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:08.460 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72746' 00:22:08.460 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.460 00:22:08.460 Latency(us) 
00:22:08.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.460 =================================================================================================================== 00:22:08.460 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:08.461 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72746 00:22:08.461 [2024-05-15 09:15:20.818751] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:08.461 09:15:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72746 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 72691 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72691 ']' 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72691 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72691 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:08.718 killing process with pid 72691 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72691' 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72691 00:22:08.718 [2024-05-15 09:15:21.071241] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:08.718 [2024-05-15 09:15:21.071283] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:08.718 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72691 00:22:08.977 09:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:08.977 09:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.977 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:08.977 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.977 09:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:08.977 "subsystems": [ 00:22:08.977 { 00:22:08.977 "subsystem": "keyring", 00:22:08.977 "config": [] 00:22:08.977 }, 00:22:08.977 { 00:22:08.977 "subsystem": "iobuf", 00:22:08.977 "config": [ 00:22:08.977 { 00:22:08.977 "method": "iobuf_set_options", 00:22:08.977 "params": { 00:22:08.977 "small_pool_count": 8192, 00:22:08.977 "large_pool_count": 1024, 00:22:08.977 "small_bufsize": 8192, 00:22:08.977 "large_bufsize": 135168 00:22:08.977 } 00:22:08.977 } 00:22:08.977 ] 00:22:08.977 }, 00:22:08.977 { 00:22:08.977 "subsystem": "sock", 00:22:08.977 "config": [ 00:22:08.977 { 00:22:08.977 "method": "sock_impl_set_options", 00:22:08.977 "params": { 00:22:08.977 "impl_name": "uring", 00:22:08.977 "recv_buf_size": 2097152, 00:22:08.977 "send_buf_size": 2097152, 00:22:08.977 "enable_recv_pipe": true, 00:22:08.977 "enable_quickack": false, 00:22:08.977 "enable_placement_id": 0, 00:22:08.977 "enable_zerocopy_send_server": false, 
00:22:08.977 "enable_zerocopy_send_client": false, 00:22:08.977 "zerocopy_threshold": 0, 00:22:08.977 "tls_version": 0, 00:22:08.977 "enable_ktls": false 00:22:08.977 } 00:22:08.977 }, 00:22:08.977 { 00:22:08.977 "method": "sock_impl_set_options", 00:22:08.977 "params": { 00:22:08.977 "impl_name": "posix", 00:22:08.977 "recv_buf_size": 2097152, 00:22:08.977 "send_buf_size": 2097152, 00:22:08.977 "enable_recv_pipe": true, 00:22:08.977 "enable_quickack": false, 00:22:08.977 "enable_placement_id": 0, 00:22:08.977 "enable_zerocopy_send_server": true, 00:22:08.977 "enable_zerocopy_send_client": false, 00:22:08.977 "zerocopy_threshold": 0, 00:22:08.977 "tls_version": 0, 00:22:08.977 "enable_ktls": false 00:22:08.977 } 00:22:08.977 }, 00:22:08.977 { 00:22:08.977 "method": "sock_impl_set_options", 00:22:08.977 "params": { 00:22:08.977 "impl_name": "ssl", 00:22:08.977 "recv_buf_size": 4096, 00:22:08.978 "send_buf_size": 4096, 00:22:08.978 "enable_recv_pipe": true, 00:22:08.978 "enable_quickack": false, 00:22:08.978 "enable_placement_id": 0, 00:22:08.978 "enable_zerocopy_send_server": true, 00:22:08.978 "enable_zerocopy_send_client": false, 00:22:08.978 "zerocopy_threshold": 0, 00:22:08.978 "tls_version": 0, 00:22:08.978 "enable_ktls": false 00:22:08.978 } 00:22:08.978 } 00:22:08.978 ] 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "subsystem": "vmd", 00:22:08.978 "config": [] 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "subsystem": "accel", 00:22:08.978 "config": [ 00:22:08.978 { 00:22:08.978 "method": "accel_set_options", 00:22:08.978 "params": { 00:22:08.978 "small_cache_size": 128, 00:22:08.978 "large_cache_size": 16, 00:22:08.978 "task_count": 2048, 00:22:08.978 "sequence_count": 2048, 00:22:08.978 "buf_count": 2048 00:22:08.978 } 00:22:08.978 } 00:22:08.978 ] 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "subsystem": "bdev", 00:22:08.978 "config": [ 00:22:08.978 { 00:22:08.978 "method": "bdev_set_options", 00:22:08.978 "params": { 00:22:08.978 "bdev_io_pool_size": 65535, 00:22:08.978 "bdev_io_cache_size": 256, 00:22:08.978 "bdev_auto_examine": true, 00:22:08.978 "iobuf_small_cache_size": 128, 00:22:08.978 "iobuf_large_cache_size": 16 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "bdev_raid_set_options", 00:22:08.978 "params": { 00:22:08.978 "process_window_size_kb": 1024 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "bdev_iscsi_set_options", 00:22:08.978 "params": { 00:22:08.978 "timeout_sec": 30 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "bdev_nvme_set_options", 00:22:08.978 "params": { 00:22:08.978 "action_on_timeout": "none", 00:22:08.978 "timeout_us": 0, 00:22:08.978 "timeout_admin_us": 0, 00:22:08.978 "keep_alive_timeout_ms": 10000, 00:22:08.978 "arbitration_burst": 0, 00:22:08.978 "low_priority_weight": 0, 00:22:08.978 "medium_priority_weight": 0, 00:22:08.978 "high_priority_weight": 0, 00:22:08.978 "nvme_adminq_poll_period_us": 10000, 00:22:08.978 "nvme_ioq_poll_period_us": 0, 00:22:08.978 "io_queue_requests": 0, 00:22:08.978 "delay_cmd_submit": true, 00:22:08.978 "transport_retry_count": 4, 00:22:08.978 "bdev_retry_count": 3, 00:22:08.978 "transport_ack_timeout": 0, 00:22:08.978 "ctrlr_loss_timeout_sec": 0, 00:22:08.978 "reconnect_delay_sec": 0, 00:22:08.978 "fast_io_fail_timeout_sec": 0, 00:22:08.978 "disable_auto_failback": false, 00:22:08.978 "generate_uuids": false, 00:22:08.978 "transport_tos": 0, 00:22:08.978 "nvme_error_stat": false, 00:22:08.978 "rdma_srq_size": 0, 00:22:08.978 "io_path_stat": false, 
00:22:08.978 "allow_accel_sequence": false, 00:22:08.978 "rdma_max_cq_size": 0, 00:22:08.978 "rdma_cm_event_timeout_ms": 0, 00:22:08.978 "dhchap_digests": [ 00:22:08.978 "sha256", 00:22:08.978 "sha384", 00:22:08.978 "sha512" 00:22:08.978 ], 00:22:08.978 "dhchap_dhgroups": [ 00:22:08.978 "null", 00:22:08.978 "ffdhe2048", 00:22:08.978 "ffdhe3072", 00:22:08.978 "ffdhe4096", 00:22:08.978 "ffdhe6144", 00:22:08.978 "ffdhe8192" 00:22:08.978 ] 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "bdev_nvme_set_hotplug", 00:22:08.978 "params": { 00:22:08.978 "period_us": 100000, 00:22:08.978 "enable": false 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "bdev_malloc_create", 00:22:08.978 "params": { 00:22:08.978 "name": "malloc0", 00:22:08.978 "num_blocks": 8192, 00:22:08.978 "block_size": 4096, 00:22:08.978 "physical_block_size": 4096, 00:22:08.978 "uuid": "1265b21f-49f3-48f9-85a5-c5ca1289dd6d", 00:22:08.978 "optimal_io_boundary": 0 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "bdev_wait_for_examine" 00:22:08.978 } 00:22:08.978 ] 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "subsystem": "nbd", 00:22:08.978 "config": [] 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "subsystem": "scheduler", 00:22:08.978 "config": [ 00:22:08.978 { 00:22:08.978 "method": "framework_set_scheduler", 00:22:08.978 "params": { 00:22:08.978 "name": "static" 00:22:08.978 } 00:22:08.978 } 00:22:08.978 ] 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "subsystem": "nvmf", 00:22:08.978 "config": [ 00:22:08.978 { 00:22:08.978 "method": "nvmf_set_config", 00:22:08.978 "params": { 00:22:08.978 "discovery_filter": "match_any", 00:22:08.978 "admin_cmd_passthru": { 00:22:08.978 "identify_ctrlr": false 00:22:08.978 } 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "nvmf_set_max_subsystems", 00:22:08.978 "params": { 00:22:08.978 "max_subsystems": 1024 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "nvmf_set_crdt", 00:22:08.978 "params": { 00:22:08.978 "crdt1": 0, 00:22:08.978 "crdt2": 0, 00:22:08.978 "crdt3": 0 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "nvmf_create_transport", 00:22:08.978 "params": { 00:22:08.978 "trtype": "TCP", 00:22:08.978 "max_queue_depth": 128, 00:22:08.978 "max_io_qpairs_per_ctrlr": 127, 00:22:08.978 "in_capsule_data_size": 4096, 00:22:08.978 "max_io_size": 131072, 00:22:08.978 "io_unit_size": 131072, 00:22:08.978 "max_aq_depth": 128, 00:22:08.978 "num_shared_buffers": 511, 00:22:08.978 "buf_cache_size": 4294967295, 00:22:08.978 "dif_insert_or_strip": false, 00:22:08.978 "zcopy": false, 00:22:08.978 "c2h_success": false, 00:22:08.978 "sock_priority": 0, 00:22:08.978 "abort_timeout_sec": 1, 00:22:08.978 "ack_timeout": 0, 00:22:08.978 "data_wr_pool_size": 0 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "nvmf_create_subsystem", 00:22:08.978 "params": { 00:22:08.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.978 "allow_any_host": false, 00:22:08.978 "serial_number": "SPDK00000000000001", 00:22:08.978 "model_number": "SPDK bdev Controller", 00:22:08.978 "max_namespaces": 10, 00:22:08.978 "min_cntlid": 1, 00:22:08.978 "max_cntlid": 65519, 00:22:08.978 "ana_reporting": false 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "nvmf_subsystem_add_host", 00:22:08.978 "params": { 00:22:08.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.978 "host": "nqn.2016-06.io.spdk:host1", 00:22:08.978 "psk": "/tmp/tmp.6Yv0ygeGUW" 00:22:08.978 } 00:22:08.978 }, 
00:22:08.978 { 00:22:08.978 "method": "nvmf_subsystem_add_ns", 00:22:08.978 "params": { 00:22:08.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.978 "namespace": { 00:22:08.978 "nsid": 1, 00:22:08.978 "bdev_name": "malloc0", 00:22:08.978 "nguid": "1265B21F49F348F985A5C5CA1289DD6D", 00:22:08.978 "uuid": "1265b21f-49f3-48f9-85a5-c5ca1289dd6d", 00:22:08.978 "no_auto_visible": false 00:22:08.978 } 00:22:08.978 } 00:22:08.978 }, 00:22:08.978 { 00:22:08.978 "method": "nvmf_subsystem_add_listener", 00:22:08.978 "params": { 00:22:08.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.978 "listen_address": { 00:22:08.978 "trtype": "TCP", 00:22:08.978 "adrfam": "IPv4", 00:22:08.978 "traddr": "10.0.0.2", 00:22:08.978 "trsvcid": "4420" 00:22:08.978 }, 00:22:08.978 "secure_channel": true 00:22:08.978 } 00:22:08.978 } 00:22:08.978 ] 00:22:08.978 } 00:22:08.978 ] 00:22:08.978 }' 00:22:08.978 09:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72789 00:22:08.978 09:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72789 00:22:08.978 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72789 ']' 00:22:08.978 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.978 09:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:08.978 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:08.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.979 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.979 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:08.979 09:15:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.979 [2024-05-15 09:15:21.357400] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:08.979 [2024-05-15 09:15:21.357487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.237 [2024-05-15 09:15:21.495852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.237 [2024-05-15 09:15:21.595139] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.237 [2024-05-15 09:15:21.595195] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.237 [2024-05-15 09:15:21.595206] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.237 [2024-05-15 09:15:21.595216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.237 [2024-05-15 09:15:21.595224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.237 [2024-05-15 09:15:21.595318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.495 [2024-05-15 09:15:21.806907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.495 [2024-05-15 09:15:21.822812] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:09.495 [2024-05-15 09:15:21.838788] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:09.495 [2024-05-15 09:15:21.838858] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.495 [2024-05-15 09:15:21.839034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=72821 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 72821 /var/tmp/bdevperf.sock 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72821 ']' 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:10.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
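[editor's sketch] The large JSON blocks above and below come from the config round-trip the test performs: the live target and bdevperf configurations are captured with save_config, and fresh instances are then started with those captures fed back on a file descriptor (-c /dev/fd/62 for the target, -c /dev/fd/63 for bdevperf). How the descriptors are wired up is not visible in the trace; the sketch below uses process substitution as one equivalent way to do it, with variable names chosen for illustration only.
  SPDK=/home/vagrant/spdk_repo/spdk
  tgtconf=$("$SPDK/scripts/rpc.py" save_config)
  bdevperfconf=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)
  # restart the target from the captured JSON instead of re-issuing individual RPCs
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
Reloading from saved config reproduces the transport, subsystem, listener and PSK-bound host without any further rpc.py calls, which is what the listener and PSK warnings right above confirm.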
00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:10.062 09:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:10.062 "subsystems": [ 00:22:10.062 { 00:22:10.062 "subsystem": "keyring", 00:22:10.062 "config": [] 00:22:10.062 }, 00:22:10.062 { 00:22:10.062 "subsystem": "iobuf", 00:22:10.062 "config": [ 00:22:10.062 { 00:22:10.062 "method": "iobuf_set_options", 00:22:10.062 "params": { 00:22:10.062 "small_pool_count": 8192, 00:22:10.062 "large_pool_count": 1024, 00:22:10.063 "small_bufsize": 8192, 00:22:10.063 "large_bufsize": 135168 00:22:10.063 } 00:22:10.063 } 00:22:10.063 ] 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "subsystem": "sock", 00:22:10.063 "config": [ 00:22:10.063 { 00:22:10.063 "method": "sock_impl_set_options", 00:22:10.063 "params": { 00:22:10.063 "impl_name": "uring", 00:22:10.063 "recv_buf_size": 2097152, 00:22:10.063 "send_buf_size": 2097152, 00:22:10.063 "enable_recv_pipe": true, 00:22:10.063 "enable_quickack": false, 00:22:10.063 "enable_placement_id": 0, 00:22:10.063 "enable_zerocopy_send_server": false, 00:22:10.063 "enable_zerocopy_send_client": false, 00:22:10.063 "zerocopy_threshold": 0, 00:22:10.063 "tls_version": 0, 00:22:10.063 "enable_ktls": false 00:22:10.063 } 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "method": "sock_impl_set_options", 00:22:10.063 "params": { 00:22:10.063 "impl_name": "posix", 00:22:10.063 "recv_buf_size": 2097152, 00:22:10.063 "send_buf_size": 2097152, 00:22:10.063 "enable_recv_pipe": true, 00:22:10.063 "enable_quickack": false, 00:22:10.063 "enable_placement_id": 0, 00:22:10.063 "enable_zerocopy_send_server": true, 00:22:10.063 "enable_zerocopy_send_client": false, 00:22:10.063 "zerocopy_threshold": 0, 00:22:10.063 "tls_version": 0, 00:22:10.063 "enable_ktls": false 00:22:10.063 } 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "method": "sock_impl_set_options", 00:22:10.063 "params": { 00:22:10.063 "impl_name": "ssl", 00:22:10.063 "recv_buf_size": 4096, 00:22:10.063 "send_buf_size": 4096, 00:22:10.063 "enable_recv_pipe": true, 00:22:10.063 "enable_quickack": false, 00:22:10.063 "enable_placement_id": 0, 00:22:10.063 "enable_zerocopy_send_server": true, 00:22:10.063 "enable_zerocopy_send_client": false, 00:22:10.063 "zerocopy_threshold": 0, 00:22:10.063 "tls_version": 0, 00:22:10.063 "enable_ktls": false 00:22:10.063 } 00:22:10.063 } 00:22:10.063 ] 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "subsystem": "vmd", 00:22:10.063 "config": [] 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "subsystem": "accel", 00:22:10.063 "config": [ 00:22:10.063 { 00:22:10.063 "method": "accel_set_options", 00:22:10.063 "params": { 00:22:10.063 "small_cache_size": 128, 00:22:10.063 "large_cache_size": 16, 00:22:10.063 "task_count": 2048, 00:22:10.063 "sequence_count": 2048, 00:22:10.063 "buf_count": 2048 00:22:10.063 } 00:22:10.063 } 00:22:10.063 ] 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "subsystem": "bdev", 00:22:10.063 "config": [ 00:22:10.063 { 00:22:10.063 "method": "bdev_set_options", 00:22:10.063 "params": { 00:22:10.063 "bdev_io_pool_size": 65535, 00:22:10.063 "bdev_io_cache_size": 256, 00:22:10.063 "bdev_auto_examine": true, 00:22:10.063 "iobuf_small_cache_size": 128, 00:22:10.063 "iobuf_large_cache_size": 16 
00:22:10.063 } 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "method": "bdev_raid_set_options", 00:22:10.063 "params": { 00:22:10.063 "process_window_size_kb": 1024 00:22:10.063 } 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "method": "bdev_iscsi_set_options", 00:22:10.063 "params": { 00:22:10.063 "timeout_sec": 30 00:22:10.063 } 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "method": "bdev_nvme_set_options", 00:22:10.063 "params": { 00:22:10.063 "action_on_timeout": "none", 00:22:10.063 "timeout_us": 0, 00:22:10.063 "timeout_admin_us": 0, 00:22:10.063 "keep_alive_timeout_ms": 10000, 00:22:10.063 "arbitration_burst": 0, 00:22:10.063 "low_priority_weight": 0, 00:22:10.063 "medium_priority_weight": 0, 00:22:10.063 "high_priority_weight": 0, 00:22:10.063 "nvme_adminq_poll_period_us": 10000, 00:22:10.063 "nvme_ioq_poll_period_us": 0, 00:22:10.063 "io_queue_requests": 512, 00:22:10.063 "delay_cmd_submit": true, 00:22:10.063 "transport_retry_count": 4, 00:22:10.063 "bdev_retry_count": 3, 00:22:10.063 "transport_ack_timeout": 0, 00:22:10.063 "ctrlr_loss_timeout_sec": 0, 00:22:10.063 "reconnect_delay_sec": 0, 00:22:10.063 "fast_io_fail_timeout_sec": 0, 00:22:10.063 "disable_auto_failback": false, 00:22:10.063 "generate_uuids": false, 00:22:10.063 "transport_tos": 0, 00:22:10.063 "nvme_error_stat": false, 00:22:10.063 "rdma_srq_size": 0, 00:22:10.063 "io_path_stat": false, 00:22:10.063 "allow_accel_sequence": false, 00:22:10.063 "rdma_max_cq_size": 0, 00:22:10.063 "rdma_cm_event_timeout_ms": 0, 00:22:10.063 "dhchap_digests": [ 00:22:10.063 "sha256", 00:22:10.063 "sha384", 00:22:10.063 "sha512" 00:22:10.063 ], 00:22:10.063 "dhchap_dhgroups": [ 00:22:10.063 "null", 00:22:10.063 "ffdhe2048", 00:22:10.063 "ffdhe3072", 00:22:10.063 "ffdhe4096", 00:22:10.063 "ffdhe6144", 00:22:10.063 "ffdhe8192" 00:22:10.063 ] 00:22:10.063 } 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "method": "bdev_nvme_attach_controller", 00:22:10.063 "params": { 00:22:10.063 "name": "TLSTEST", 00:22:10.063 "trtype": "TCP", 00:22:10.063 "adrfam": "IPv4", 00:22:10.063 "traddr": "10.0.0.2", 00:22:10.063 "trsvcid": "4420", 00:22:10.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.063 "prchk_reftag": false, 00:22:10.063 "prchk_guard": false, 00:22:10.063 "ctrlr_loss_timeout_sec": 0, 00:22:10.063 "reconnect_delay_sec": 0, 00:22:10.063 "fast_io_fail_timeout_sec": 0, 00:22:10.063 "psk": "/tmp/tmp.6Yv0ygeGUW", 00:22:10.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.063 "hdgst": false, 00:22:10.063 "ddgst": false 00:22:10.063 } 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "method": "bdev_nvme_set_hotplug", 00:22:10.063 "params": { 00:22:10.063 "period_us": 100000, 00:22:10.063 "enable": false 00:22:10.063 } 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "method": "bdev_wait_for_examine" 00:22:10.063 } 00:22:10.063 ] 00:22:10.063 }, 00:22:10.063 { 00:22:10.063 "subsystem": "nbd", 00:22:10.063 "config": [] 00:22:10.063 } 00:22:10.063 ] 00:22:10.063 }' 00:22:10.063 [2024-05-15 09:15:22.401756] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
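[editor's sketch] With bdevperf up from the echoed configuration above, the trace that follows drives the actual I/O through bdevperf's RPC helper rather than letting it free-run. The single call, copied from this run, is:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
The 10-second "Running I/O" window in the results below comes from the -t 10 passed to bdevperf itself; the -t 20 given to the helper here appears to bound how long the helper waits for the run, though the trace does not spell that out.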
00:22:10.063 [2024-05-15 09:15:22.402118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72821 ] 00:22:10.331 [2024-05-15 09:15:22.547513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.331 [2024-05-15 09:15:22.665566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.602 [2024-05-15 09:15:22.819825] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.602 [2024-05-15 09:15:22.819950] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:11.165 09:15:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:11.165 09:15:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:11.165 09:15:23 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:11.165 Running I/O for 10 seconds... 00:22:21.149 00:22:21.149 Latency(us) 00:22:21.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.149 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.149 Verification LBA range: start 0x0 length 0x2000 00:22:21.149 TLSTESTn1 : 10.01 5439.22 21.25 0.00 0.00 23493.84 3885.35 26089.57 00:22:21.149 =================================================================================================================== 00:22:21.149 Total : 5439.22 21.25 0.00 0.00 23493.84 3885.35 26089.57 00:22:21.149 0 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 72821 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72821 ']' 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72821 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72821 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:21.149 killing process with pid 72821 00:22:21.149 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.149 00:22:21.149 Latency(us) 00:22:21.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.149 =================================================================================================================== 00:22:21.149 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72821' 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72821 00:22:21.149 [2024-05-15 09:15:33.520101] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:21.149 09:15:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 72821 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 72789 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72789 ']' 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72789 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72789 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:21.410 killing process with pid 72789 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72789' 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72789 00:22:21.410 [2024-05-15 09:15:33.773091] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:21.410 [2024-05-15 09:15:33.773138] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:21.410 09:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 72789 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72955 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72955 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 72955 ']' 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:21.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:21.673 09:15:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.673 [2024-05-15 09:15:34.074949] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
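[editor's sketch] After tearing down the PSK-path based setup, the trace below switches the initiator to the keyring-based variant: the PSK file is first registered as a named key and the controller attach then refers to that key name instead of the file path. This matches the direction the repeated "PSK path" / "spdk_nvme_ctrlr_opts.psk" deprecation warnings point at, though the trace itself only demonstrates the commands. A condensed form, with every flag taken from this run:
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6Yv0ygeGUW
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
The resulting nvme0n1 bdev is then exercised through the same bdevperf.py perform_tests flow as before.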
00:22:21.673 [2024-05-15 09:15:34.075053] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.939 [2024-05-15 09:15:34.219352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.939 [2024-05-15 09:15:34.321536] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.939 [2024-05-15 09:15:34.321583] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.939 [2024-05-15 09:15:34.321595] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.939 [2024-05-15 09:15:34.321605] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.939 [2024-05-15 09:15:34.321613] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.939 [2024-05-15 09:15:34.321638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.6Yv0ygeGUW 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6Yv0ygeGUW 00:22:22.885 09:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:22.885 [2024-05-15 09:15:35.313764] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.142 09:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:23.142 09:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:23.400 [2024-05-15 09:15:35.813836] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:23.400 [2024-05-15 09:15:35.813935] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.400 [2024-05-15 09:15:35.814109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.400 09:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.966 malloc0 00:22:23.966 09:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:23.966 09:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Yv0ygeGUW 00:22:24.224 [2024-05-15 09:15:36.591299] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73013 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73013 /var/tmp/bdevperf.sock 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 73013 ']' 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:24.224 09:15:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.224 [2024-05-15 09:15:36.666728] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:24.224 [2024-05-15 09:15:36.666829] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73013 ] 00:22:24.481 [2024-05-15 09:15:36.812787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.481 [2024-05-15 09:15:36.913070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.415 09:15:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:25.415 09:15:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:25.415 09:15:37 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6Yv0ygeGUW 00:22:25.672 09:15:37 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:25.934 [2024-05-15 09:15:38.125163] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.934 nvme0n1 00:22:25.934 09:15:38 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:25.934 Running I/O for 1 seconds... 
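The setup_nvmf_tgt step traced above reduces to a short RPC sequence: create the TCP transport, a subsystem with a malloc-backed namespace, a TLS-capable listener (-k), and a host entry tied to the PSK file. A condensed sketch, with the rpc.py path and the /tmp key file exactly as they appear in this run (both are specific to this environment):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.6Yv0ygeGUW                      # PSK file used throughout this run
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS-capable
    $rpc bdev_malloc_create 32 4096 -b malloc0   # 32 MiB backing bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"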
00:22:27.311 00:22:27.311 Latency(us) 00:22:27.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.311 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:27.311 Verification LBA range: start 0x0 length 0x2000 00:22:27.311 nvme0n1 : 1.01 5452.06 21.30 0.00 0.00 23302.78 4649.94 18724.57 00:22:27.311 =================================================================================================================== 00:22:27.311 Total : 5452.06 21.30 0.00 0.00 23302.78 4649.94 18724.57 00:22:27.311 0 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73013 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 73013 ']' 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 73013 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73013 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:27.311 killing process with pid 73013 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73013' 00:22:27.311 Received shutdown signal, test time was about 1.000000 seconds 00:22:27.311 00:22:27.311 Latency(us) 00:22:27.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.311 =================================================================================================================== 00:22:27.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 73013 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 73013 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 72955 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 72955 ']' 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 72955 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72955 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:27.311 killing process with pid 72955 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72955' 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 72955 00:22:27.311 [2024-05-15 09:15:39.623018] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:27.311 [2024-05-15 09:15:39.623066] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:27.311 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 
72955 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73060 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73060 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 73060 ']' 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:27.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:27.570 09:15:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.570 [2024-05-15 09:15:39.915853] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:27.570 [2024-05-15 09:15:39.915965] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.827 [2024-05-15 09:15:40.060036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.827 [2024-05-15 09:15:40.159472] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.827 [2024-05-15 09:15:40.159522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.827 [2024-05-15 09:15:40.159533] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.827 [2024-05-15 09:15:40.159554] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.827 [2024-05-15 09:15:40.159562] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
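Each teardown in the trace goes through the killprocess helper from autotest_common.sh: check that the pid is still alive, look up its comm name (reactor_0/reactor_1 here) and compare it against sudo, then kill and wait. A simplified sketch of the visible pattern; what the real helper does on the sudo or non-Linux branches is not exercised in this run:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                    # still running?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 in this trace
            [ "$name" = sudo ] && return 1            # simplification: only the non-sudo branch appears above
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                           # reap; non-zero status after SIGTERM is expected
    }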
00:22:27.827 [2024-05-15 09:15:40.159589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.406 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:28.406 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:28.406 09:15:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.406 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:28.406 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.710 [2024-05-15 09:15:40.890506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.710 malloc0 00:22:28.710 [2024-05-15 09:15:40.919807] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:28.710 [2024-05-15 09:15:40.919884] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.710 [2024-05-15 09:15:40.920061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=73092 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 73092 /var/tmp/bdevperf.sock 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 73092 ']' 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:28.710 09:15:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.710 [2024-05-15 09:15:40.990526] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
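On the initiator side the pattern repeats in every iteration: bdevperf is started idle (-z) on a private RPC socket, the PSK file is loaded into the keyring, the controller is attached with --psk key0, and bdevperf.py drives the verify workload. A condensed sketch using the commands and paths from this run (waiting for the socket to come up is left out):

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    # ...wait for /var/tmp/bdevperf.sock to accept RPCs...
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6Yv0ygeGUW
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests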
00:22:28.710 [2024-05-15 09:15:40.990654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73092 ] 00:22:28.710 [2024-05-15 09:15:41.130459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.968 [2024-05-15 09:15:41.262682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.535 09:15:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:29.535 09:15:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:29.535 09:15:41 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6Yv0ygeGUW 00:22:29.793 09:15:42 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:30.050 [2024-05-15 09:15:42.369945] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.050 nvme0n1 00:22:30.050 09:15:42 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.309 Running I/O for 1 seconds... 00:22:31.268 00:22:31.268 Latency(us) 00:22:31.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.268 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:31.268 Verification LBA range: start 0x0 length 0x2000 00:22:31.268 nvme0n1 : 1.01 5045.82 19.71 0.00 0.00 25132.32 5648.58 21720.50 00:22:31.268 =================================================================================================================== 00:22:31.268 Total : 5045.82 19.71 0.00 0.00 25132.32 5648.58 21720.50 00:22:31.268 0 00:22:31.268 09:15:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:31.268 09:15:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.268 09:15:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.525 09:15:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.525 09:15:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:31.525 "subsystems": [ 00:22:31.525 { 00:22:31.525 "subsystem": "keyring", 00:22:31.525 "config": [ 00:22:31.525 { 00:22:31.525 "method": "keyring_file_add_key", 00:22:31.525 "params": { 00:22:31.525 "name": "key0", 00:22:31.525 "path": "/tmp/tmp.6Yv0ygeGUW" 00:22:31.525 } 00:22:31.525 } 00:22:31.525 ] 00:22:31.525 }, 00:22:31.525 { 00:22:31.526 "subsystem": "iobuf", 00:22:31.526 "config": [ 00:22:31.526 { 00:22:31.526 "method": "iobuf_set_options", 00:22:31.526 "params": { 00:22:31.526 "small_pool_count": 8192, 00:22:31.526 "large_pool_count": 1024, 00:22:31.526 "small_bufsize": 8192, 00:22:31.526 "large_bufsize": 135168 00:22:31.526 } 00:22:31.526 } 00:22:31.526 ] 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "subsystem": "sock", 00:22:31.526 "config": [ 00:22:31.526 { 00:22:31.526 "method": "sock_impl_set_options", 00:22:31.526 "params": { 00:22:31.526 "impl_name": "uring", 00:22:31.526 "recv_buf_size": 2097152, 00:22:31.526 "send_buf_size": 2097152, 00:22:31.526 "enable_recv_pipe": true, 00:22:31.526 "enable_quickack": false, 
00:22:31.526 "enable_placement_id": 0, 00:22:31.526 "enable_zerocopy_send_server": false, 00:22:31.526 "enable_zerocopy_send_client": false, 00:22:31.526 "zerocopy_threshold": 0, 00:22:31.526 "tls_version": 0, 00:22:31.526 "enable_ktls": false 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "sock_impl_set_options", 00:22:31.526 "params": { 00:22:31.526 "impl_name": "posix", 00:22:31.526 "recv_buf_size": 2097152, 00:22:31.526 "send_buf_size": 2097152, 00:22:31.526 "enable_recv_pipe": true, 00:22:31.526 "enable_quickack": false, 00:22:31.526 "enable_placement_id": 0, 00:22:31.526 "enable_zerocopy_send_server": true, 00:22:31.526 "enable_zerocopy_send_client": false, 00:22:31.526 "zerocopy_threshold": 0, 00:22:31.526 "tls_version": 0, 00:22:31.526 "enable_ktls": false 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "sock_impl_set_options", 00:22:31.526 "params": { 00:22:31.526 "impl_name": "ssl", 00:22:31.526 "recv_buf_size": 4096, 00:22:31.526 "send_buf_size": 4096, 00:22:31.526 "enable_recv_pipe": true, 00:22:31.526 "enable_quickack": false, 00:22:31.526 "enable_placement_id": 0, 00:22:31.526 "enable_zerocopy_send_server": true, 00:22:31.526 "enable_zerocopy_send_client": false, 00:22:31.526 "zerocopy_threshold": 0, 00:22:31.526 "tls_version": 0, 00:22:31.526 "enable_ktls": false 00:22:31.526 } 00:22:31.526 } 00:22:31.526 ] 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "subsystem": "vmd", 00:22:31.526 "config": [] 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "subsystem": "accel", 00:22:31.526 "config": [ 00:22:31.526 { 00:22:31.526 "method": "accel_set_options", 00:22:31.526 "params": { 00:22:31.526 "small_cache_size": 128, 00:22:31.526 "large_cache_size": 16, 00:22:31.526 "task_count": 2048, 00:22:31.526 "sequence_count": 2048, 00:22:31.526 "buf_count": 2048 00:22:31.526 } 00:22:31.526 } 00:22:31.526 ] 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "subsystem": "bdev", 00:22:31.526 "config": [ 00:22:31.526 { 00:22:31.526 "method": "bdev_set_options", 00:22:31.526 "params": { 00:22:31.526 "bdev_io_pool_size": 65535, 00:22:31.526 "bdev_io_cache_size": 256, 00:22:31.526 "bdev_auto_examine": true, 00:22:31.526 "iobuf_small_cache_size": 128, 00:22:31.526 "iobuf_large_cache_size": 16 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "bdev_raid_set_options", 00:22:31.526 "params": { 00:22:31.526 "process_window_size_kb": 1024 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "bdev_iscsi_set_options", 00:22:31.526 "params": { 00:22:31.526 "timeout_sec": 30 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "bdev_nvme_set_options", 00:22:31.526 "params": { 00:22:31.526 "action_on_timeout": "none", 00:22:31.526 "timeout_us": 0, 00:22:31.526 "timeout_admin_us": 0, 00:22:31.526 "keep_alive_timeout_ms": 10000, 00:22:31.526 "arbitration_burst": 0, 00:22:31.526 "low_priority_weight": 0, 00:22:31.526 "medium_priority_weight": 0, 00:22:31.526 "high_priority_weight": 0, 00:22:31.526 "nvme_adminq_poll_period_us": 10000, 00:22:31.526 "nvme_ioq_poll_period_us": 0, 00:22:31.526 "io_queue_requests": 0, 00:22:31.526 "delay_cmd_submit": true, 00:22:31.526 "transport_retry_count": 4, 00:22:31.526 "bdev_retry_count": 3, 00:22:31.526 "transport_ack_timeout": 0, 00:22:31.526 "ctrlr_loss_timeout_sec": 0, 00:22:31.526 "reconnect_delay_sec": 0, 00:22:31.526 "fast_io_fail_timeout_sec": 0, 00:22:31.526 "disable_auto_failback": false, 00:22:31.526 "generate_uuids": false, 00:22:31.526 "transport_tos": 0, 00:22:31.526 
"nvme_error_stat": false, 00:22:31.526 "rdma_srq_size": 0, 00:22:31.526 "io_path_stat": false, 00:22:31.526 "allow_accel_sequence": false, 00:22:31.526 "rdma_max_cq_size": 0, 00:22:31.526 "rdma_cm_event_timeout_ms": 0, 00:22:31.526 "dhchap_digests": [ 00:22:31.526 "sha256", 00:22:31.526 "sha384", 00:22:31.526 "sha512" 00:22:31.526 ], 00:22:31.526 "dhchap_dhgroups": [ 00:22:31.526 "null", 00:22:31.526 "ffdhe2048", 00:22:31.526 "ffdhe3072", 00:22:31.526 "ffdhe4096", 00:22:31.526 "ffdhe6144", 00:22:31.526 "ffdhe8192" 00:22:31.526 ] 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "bdev_nvme_set_hotplug", 00:22:31.526 "params": { 00:22:31.526 "period_us": 100000, 00:22:31.526 "enable": false 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "bdev_malloc_create", 00:22:31.526 "params": { 00:22:31.526 "name": "malloc0", 00:22:31.526 "num_blocks": 8192, 00:22:31.526 "block_size": 4096, 00:22:31.526 "physical_block_size": 4096, 00:22:31.526 "uuid": "9500e5a1-a887-4052-b8c4-c880104719f2", 00:22:31.526 "optimal_io_boundary": 0 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "bdev_wait_for_examine" 00:22:31.526 } 00:22:31.526 ] 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "subsystem": "nbd", 00:22:31.526 "config": [] 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "subsystem": "scheduler", 00:22:31.526 "config": [ 00:22:31.526 { 00:22:31.526 "method": "framework_set_scheduler", 00:22:31.526 "params": { 00:22:31.526 "name": "static" 00:22:31.526 } 00:22:31.526 } 00:22:31.526 ] 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "subsystem": "nvmf", 00:22:31.526 "config": [ 00:22:31.526 { 00:22:31.526 "method": "nvmf_set_config", 00:22:31.526 "params": { 00:22:31.526 "discovery_filter": "match_any", 00:22:31.526 "admin_cmd_passthru": { 00:22:31.526 "identify_ctrlr": false 00:22:31.526 } 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "nvmf_set_max_subsystems", 00:22:31.526 "params": { 00:22:31.526 "max_subsystems": 1024 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "nvmf_set_crdt", 00:22:31.526 "params": { 00:22:31.526 "crdt1": 0, 00:22:31.526 "crdt2": 0, 00:22:31.526 "crdt3": 0 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "nvmf_create_transport", 00:22:31.526 "params": { 00:22:31.526 "trtype": "TCP", 00:22:31.526 "max_queue_depth": 128, 00:22:31.526 "max_io_qpairs_per_ctrlr": 127, 00:22:31.526 "in_capsule_data_size": 4096, 00:22:31.526 "max_io_size": 131072, 00:22:31.526 "io_unit_size": 131072, 00:22:31.526 "max_aq_depth": 128, 00:22:31.526 "num_shared_buffers": 511, 00:22:31.526 "buf_cache_size": 4294967295, 00:22:31.526 "dif_insert_or_strip": false, 00:22:31.526 "zcopy": false, 00:22:31.526 "c2h_success": false, 00:22:31.526 "sock_priority": 0, 00:22:31.526 "abort_timeout_sec": 1, 00:22:31.526 "ack_timeout": 0, 00:22:31.526 "data_wr_pool_size": 0 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "nvmf_create_subsystem", 00:22:31.526 "params": { 00:22:31.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.526 "allow_any_host": false, 00:22:31.526 "serial_number": "00000000000000000000", 00:22:31.526 "model_number": "SPDK bdev Controller", 00:22:31.526 "max_namespaces": 32, 00:22:31.526 "min_cntlid": 1, 00:22:31.526 "max_cntlid": 65519, 00:22:31.526 "ana_reporting": false 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "nvmf_subsystem_add_host", 00:22:31.526 "params": { 00:22:31.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.526 "host": 
"nqn.2016-06.io.spdk:host1", 00:22:31.526 "psk": "key0" 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "nvmf_subsystem_add_ns", 00:22:31.526 "params": { 00:22:31.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.526 "namespace": { 00:22:31.526 "nsid": 1, 00:22:31.526 "bdev_name": "malloc0", 00:22:31.526 "nguid": "9500E5A1A8874052B8C4C880104719F2", 00:22:31.526 "uuid": "9500e5a1-a887-4052-b8c4-c880104719f2", 00:22:31.526 "no_auto_visible": false 00:22:31.526 } 00:22:31.526 } 00:22:31.526 }, 00:22:31.526 { 00:22:31.526 "method": "nvmf_subsystem_add_listener", 00:22:31.526 "params": { 00:22:31.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.526 "listen_address": { 00:22:31.526 "trtype": "TCP", 00:22:31.526 "adrfam": "IPv4", 00:22:31.526 "traddr": "10.0.0.2", 00:22:31.526 "trsvcid": "4420" 00:22:31.526 }, 00:22:31.526 "secure_channel": true 00:22:31.526 } 00:22:31.526 } 00:22:31.526 ] 00:22:31.526 } 00:22:31.526 ] 00:22:31.526 }' 00:22:31.526 09:15:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:31.784 09:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:31.784 "subsystems": [ 00:22:31.784 { 00:22:31.784 "subsystem": "keyring", 00:22:31.784 "config": [ 00:22:31.784 { 00:22:31.784 "method": "keyring_file_add_key", 00:22:31.784 "params": { 00:22:31.784 "name": "key0", 00:22:31.784 "path": "/tmp/tmp.6Yv0ygeGUW" 00:22:31.784 } 00:22:31.784 } 00:22:31.784 ] 00:22:31.784 }, 00:22:31.784 { 00:22:31.784 "subsystem": "iobuf", 00:22:31.784 "config": [ 00:22:31.784 { 00:22:31.784 "method": "iobuf_set_options", 00:22:31.784 "params": { 00:22:31.784 "small_pool_count": 8192, 00:22:31.784 "large_pool_count": 1024, 00:22:31.784 "small_bufsize": 8192, 00:22:31.784 "large_bufsize": 135168 00:22:31.784 } 00:22:31.784 } 00:22:31.784 ] 00:22:31.784 }, 00:22:31.784 { 00:22:31.784 "subsystem": "sock", 00:22:31.784 "config": [ 00:22:31.784 { 00:22:31.784 "method": "sock_impl_set_options", 00:22:31.784 "params": { 00:22:31.784 "impl_name": "uring", 00:22:31.784 "recv_buf_size": 2097152, 00:22:31.784 "send_buf_size": 2097152, 00:22:31.784 "enable_recv_pipe": true, 00:22:31.784 "enable_quickack": false, 00:22:31.784 "enable_placement_id": 0, 00:22:31.784 "enable_zerocopy_send_server": false, 00:22:31.784 "enable_zerocopy_send_client": false, 00:22:31.784 "zerocopy_threshold": 0, 00:22:31.784 "tls_version": 0, 00:22:31.784 "enable_ktls": false 00:22:31.784 } 00:22:31.784 }, 00:22:31.784 { 00:22:31.784 "method": "sock_impl_set_options", 00:22:31.784 "params": { 00:22:31.784 "impl_name": "posix", 00:22:31.784 "recv_buf_size": 2097152, 00:22:31.784 "send_buf_size": 2097152, 00:22:31.784 "enable_recv_pipe": true, 00:22:31.784 "enable_quickack": false, 00:22:31.784 "enable_placement_id": 0, 00:22:31.784 "enable_zerocopy_send_server": true, 00:22:31.784 "enable_zerocopy_send_client": false, 00:22:31.784 "zerocopy_threshold": 0, 00:22:31.784 "tls_version": 0, 00:22:31.784 "enable_ktls": false 00:22:31.784 } 00:22:31.784 }, 00:22:31.784 { 00:22:31.784 "method": "sock_impl_set_options", 00:22:31.784 "params": { 00:22:31.784 "impl_name": "ssl", 00:22:31.784 "recv_buf_size": 4096, 00:22:31.784 "send_buf_size": 4096, 00:22:31.784 "enable_recv_pipe": true, 00:22:31.784 "enable_quickack": false, 00:22:31.784 "enable_placement_id": 0, 00:22:31.784 "enable_zerocopy_send_server": true, 00:22:31.784 "enable_zerocopy_send_client": false, 00:22:31.784 "zerocopy_threshold": 0, 00:22:31.784 "tls_version": 0, 
00:22:31.784 "enable_ktls": false 00:22:31.784 } 00:22:31.784 } 00:22:31.785 ] 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "subsystem": "vmd", 00:22:31.785 "config": [] 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "subsystem": "accel", 00:22:31.785 "config": [ 00:22:31.785 { 00:22:31.785 "method": "accel_set_options", 00:22:31.785 "params": { 00:22:31.785 "small_cache_size": 128, 00:22:31.785 "large_cache_size": 16, 00:22:31.785 "task_count": 2048, 00:22:31.785 "sequence_count": 2048, 00:22:31.785 "buf_count": 2048 00:22:31.785 } 00:22:31.785 } 00:22:31.785 ] 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "subsystem": "bdev", 00:22:31.785 "config": [ 00:22:31.785 { 00:22:31.785 "method": "bdev_set_options", 00:22:31.785 "params": { 00:22:31.785 "bdev_io_pool_size": 65535, 00:22:31.785 "bdev_io_cache_size": 256, 00:22:31.785 "bdev_auto_examine": true, 00:22:31.785 "iobuf_small_cache_size": 128, 00:22:31.785 "iobuf_large_cache_size": 16 00:22:31.785 } 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "method": "bdev_raid_set_options", 00:22:31.785 "params": { 00:22:31.785 "process_window_size_kb": 1024 00:22:31.785 } 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "method": "bdev_iscsi_set_options", 00:22:31.785 "params": { 00:22:31.785 "timeout_sec": 30 00:22:31.785 } 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "method": "bdev_nvme_set_options", 00:22:31.785 "params": { 00:22:31.785 "action_on_timeout": "none", 00:22:31.785 "timeout_us": 0, 00:22:31.785 "timeout_admin_us": 0, 00:22:31.785 "keep_alive_timeout_ms": 10000, 00:22:31.785 "arbitration_burst": 0, 00:22:31.785 "low_priority_weight": 0, 00:22:31.785 "medium_priority_weight": 0, 00:22:31.785 "high_priority_weight": 0, 00:22:31.785 "nvme_adminq_poll_period_us": 10000, 00:22:31.785 "nvme_ioq_poll_period_us": 0, 00:22:31.785 "io_queue_requests": 512, 00:22:31.785 "delay_cmd_submit": true, 00:22:31.785 "transport_retry_count": 4, 00:22:31.785 "bdev_retry_count": 3, 00:22:31.785 "transport_ack_timeout": 0, 00:22:31.785 "ctrlr_loss_timeout_sec": 0, 00:22:31.785 "reconnect_delay_sec": 0, 00:22:31.785 "fast_io_fail_timeout_sec": 0, 00:22:31.785 "disable_auto_failback": false, 00:22:31.785 "generate_uuids": false, 00:22:31.785 "transport_tos": 0, 00:22:31.785 "nvme_error_stat": false, 00:22:31.785 "rdma_srq_size": 0, 00:22:31.785 "io_path_stat": false, 00:22:31.785 "allow_accel_sequence": false, 00:22:31.785 "rdma_max_cq_size": 0, 00:22:31.785 "rdma_cm_event_timeout_ms": 0, 00:22:31.785 "dhchap_digests": [ 00:22:31.785 "sha256", 00:22:31.785 "sha384", 00:22:31.785 "sha512" 00:22:31.785 ], 00:22:31.785 "dhchap_dhgroups": [ 00:22:31.785 "null", 00:22:31.785 "ffdhe2048", 00:22:31.785 "ffdhe3072", 00:22:31.785 "ffdhe4096", 00:22:31.785 "ffdhe6144", 00:22:31.785 "ffdhe8192" 00:22:31.785 ] 00:22:31.785 } 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "method": "bdev_nvme_attach_controller", 00:22:31.785 "params": { 00:22:31.785 "name": "nvme0", 00:22:31.785 "trtype": "TCP", 00:22:31.785 "adrfam": "IPv4", 00:22:31.785 "traddr": "10.0.0.2", 00:22:31.785 "trsvcid": "4420", 00:22:31.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.785 "prchk_reftag": false, 00:22:31.785 "prchk_guard": false, 00:22:31.785 "ctrlr_loss_timeout_sec": 0, 00:22:31.785 "reconnect_delay_sec": 0, 00:22:31.785 "fast_io_fail_timeout_sec": 0, 00:22:31.785 "psk": "key0", 00:22:31.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.785 "hdgst": false, 00:22:31.785 "ddgst": false 00:22:31.785 } 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "method": "bdev_nvme_set_hotplug", 00:22:31.785 "params": 
{ 00:22:31.785 "period_us": 100000, 00:22:31.785 "enable": false 00:22:31.785 } 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "method": "bdev_enable_histogram", 00:22:31.785 "params": { 00:22:31.785 "name": "nvme0n1", 00:22:31.785 "enable": true 00:22:31.785 } 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "method": "bdev_wait_for_examine" 00:22:31.785 } 00:22:31.785 ] 00:22:31.785 }, 00:22:31.785 { 00:22:31.785 "subsystem": "nbd", 00:22:31.785 "config": [] 00:22:31.785 } 00:22:31.785 ] 00:22:31.785 }' 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 73092 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 73092 ']' 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 73092 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73092 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:31.785 killing process with pid 73092 00:22:31.785 Received shutdown signal, test time was about 1.000000 seconds 00:22:31.785 00:22:31.785 Latency(us) 00:22:31.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.785 =================================================================================================================== 00:22:31.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73092' 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 73092 00:22:31.785 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 73092 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 73060 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 73060 ']' 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 73060 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73060 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:32.043 killing process with pid 73060 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73060' 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 73060 00:22:32.043 [2024-05-15 09:15:44.349932] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:32.043 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 73060 00:22:32.301 09:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:32.301 09:15:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.301 09:15:44 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:32.301 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.301 09:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:32.301 "subsystems": [ 00:22:32.301 { 00:22:32.301 "subsystem": "keyring", 00:22:32.301 "config": [ 00:22:32.301 { 00:22:32.301 "method": "keyring_file_add_key", 00:22:32.301 "params": { 00:22:32.301 "name": "key0", 00:22:32.301 "path": "/tmp/tmp.6Yv0ygeGUW" 00:22:32.301 } 00:22:32.301 } 00:22:32.301 ] 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "subsystem": "iobuf", 00:22:32.301 "config": [ 00:22:32.301 { 00:22:32.301 "method": "iobuf_set_options", 00:22:32.301 "params": { 00:22:32.301 "small_pool_count": 8192, 00:22:32.301 "large_pool_count": 1024, 00:22:32.301 "small_bufsize": 8192, 00:22:32.301 "large_bufsize": 135168 00:22:32.301 } 00:22:32.301 } 00:22:32.301 ] 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "subsystem": "sock", 00:22:32.301 "config": [ 00:22:32.301 { 00:22:32.301 "method": "sock_impl_set_options", 00:22:32.301 "params": { 00:22:32.301 "impl_name": "uring", 00:22:32.301 "recv_buf_size": 2097152, 00:22:32.301 "send_buf_size": 2097152, 00:22:32.301 "enable_recv_pipe": true, 00:22:32.301 "enable_quickack": false, 00:22:32.301 "enable_placement_id": 0, 00:22:32.301 "enable_zerocopy_send_server": false, 00:22:32.301 "enable_zerocopy_send_client": false, 00:22:32.301 "zerocopy_threshold": 0, 00:22:32.301 "tls_version": 0, 00:22:32.301 "enable_ktls": false 00:22:32.301 } 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "method": "sock_impl_set_options", 00:22:32.301 "params": { 00:22:32.301 "impl_name": "posix", 00:22:32.301 "recv_buf_size": 2097152, 00:22:32.301 "send_buf_size": 2097152, 00:22:32.301 "enable_recv_pipe": true, 00:22:32.301 "enable_quickack": false, 00:22:32.301 "enable_placement_id": 0, 00:22:32.301 "enable_zerocopy_send_server": true, 00:22:32.301 "enable_zerocopy_send_client": false, 00:22:32.301 "zerocopy_threshold": 0, 00:22:32.301 "tls_version": 0, 00:22:32.301 "enable_ktls": false 00:22:32.301 } 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "method": "sock_impl_set_options", 00:22:32.301 "params": { 00:22:32.301 "impl_name": "ssl", 00:22:32.301 "recv_buf_size": 4096, 00:22:32.301 "send_buf_size": 4096, 00:22:32.301 "enable_recv_pipe": true, 00:22:32.301 "enable_quickack": false, 00:22:32.301 "enable_placement_id": 0, 00:22:32.301 "enable_zerocopy_send_server": true, 00:22:32.301 "enable_zerocopy_send_client": false, 00:22:32.301 "zerocopy_threshold": 0, 00:22:32.301 "tls_version": 0, 00:22:32.301 "enable_ktls": false 00:22:32.301 } 00:22:32.301 } 00:22:32.301 ] 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "subsystem": "vmd", 00:22:32.301 "config": [] 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "subsystem": "accel", 00:22:32.301 "config": [ 00:22:32.301 { 00:22:32.301 "method": "accel_set_options", 00:22:32.301 "params": { 00:22:32.301 "small_cache_size": 128, 00:22:32.301 "large_cache_size": 16, 00:22:32.301 "task_count": 2048, 00:22:32.301 "sequence_count": 2048, 00:22:32.301 "buf_count": 2048 00:22:32.301 } 00:22:32.301 } 00:22:32.301 ] 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "subsystem": "bdev", 00:22:32.301 "config": [ 00:22:32.301 { 00:22:32.301 "method": "bdev_set_options", 00:22:32.301 "params": { 00:22:32.301 "bdev_io_pool_size": 65535, 00:22:32.301 "bdev_io_cache_size": 256, 00:22:32.301 "bdev_auto_examine": true, 00:22:32.301 "iobuf_small_cache_size": 128, 00:22:32.301 "iobuf_large_cache_size": 16 00:22:32.301 } 
00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "method": "bdev_raid_set_options", 00:22:32.301 "params": { 00:22:32.301 "process_window_size_kb": 1024 00:22:32.301 } 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "method": "bdev_iscsi_set_options", 00:22:32.301 "params": { 00:22:32.301 "timeout_sec": 30 00:22:32.301 } 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "method": "bdev_nvme_set_options", 00:22:32.301 "params": { 00:22:32.301 "action_on_timeout": "none", 00:22:32.301 "timeout_us": 0, 00:22:32.301 "timeout_admin_us": 0, 00:22:32.301 "keep_alive_timeout_ms": 10000, 00:22:32.301 "arbitration_burst": 0, 00:22:32.301 "low_priority_weight": 0, 00:22:32.301 "medium_priority_weight": 0, 00:22:32.301 "high_priority_weight": 0, 00:22:32.301 "nvme_adminq_poll_period_us": 10000, 00:22:32.301 "nvme_ioq_poll_period_us": 0, 00:22:32.301 "io_queue_requests": 0, 00:22:32.301 "delay_cmd_submit": true, 00:22:32.301 "transport_retry_count": 4, 00:22:32.301 "bdev_retry_count": 3, 00:22:32.301 "transport_ack_timeout": 0, 00:22:32.301 "ctrlr_loss_timeout_sec": 0, 00:22:32.301 "reconnect_delay_sec": 0, 00:22:32.301 "fast_io_fail_timeout_sec": 0, 00:22:32.301 "disable_auto_failback": false, 00:22:32.301 "generate_uuids": false, 00:22:32.301 "transport_tos": 0, 00:22:32.301 "nvme_error_stat": false, 00:22:32.301 "rdma_srq_size": 0, 00:22:32.301 "io_path_stat": false, 00:22:32.301 "allow_accel_sequence": false, 00:22:32.301 "rdma_max_cq_size": 0, 00:22:32.301 "rdma_cm_event_timeout_ms": 0, 00:22:32.301 "dhchap_digests": [ 00:22:32.301 "sha256", 00:22:32.301 "sha384", 00:22:32.301 "sha512" 00:22:32.301 ], 00:22:32.301 "dhchap_dhgroups": [ 00:22:32.301 "null", 00:22:32.301 "ffdhe2048", 00:22:32.301 "ffdhe3072", 00:22:32.301 "ffdhe4096", 00:22:32.301 "ffdhe6144", 00:22:32.301 "ffdhe8192" 00:22:32.301 ] 00:22:32.301 } 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "method": "bdev_nvme_set_hotplug", 00:22:32.301 "params": { 00:22:32.301 "period_us": 100000, 00:22:32.301 "enable": false 00:22:32.301 } 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "method": "bdev_malloc_create", 00:22:32.301 "params": { 00:22:32.301 "name": "malloc0", 00:22:32.301 "num_blocks": 8192, 00:22:32.301 "block_size": 4096, 00:22:32.301 "physical_block_size": 4096, 00:22:32.301 "uuid": "9500e5a1-a887-4052-b8c4-c880104719f2", 00:22:32.301 "optimal_io_boundary": 0 00:22:32.301 } 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "method": "bdev_wait_for_examine" 00:22:32.301 } 00:22:32.301 ] 00:22:32.301 }, 00:22:32.301 { 00:22:32.301 "subsystem": "nbd", 00:22:32.301 "config": [] 00:22:32.301 }, 00:22:32.301 { 00:22:32.302 "subsystem": "scheduler", 00:22:32.302 "config": [ 00:22:32.302 { 00:22:32.302 "method": "framework_set_scheduler", 00:22:32.302 "params": { 00:22:32.302 "name": "static" 00:22:32.302 } 00:22:32.302 } 00:22:32.302 ] 00:22:32.302 }, 00:22:32.302 { 00:22:32.302 "subsystem": "nvmf", 00:22:32.302 "config": [ 00:22:32.302 { 00:22:32.302 "method": "nvmf_set_config", 00:22:32.302 "params": { 00:22:32.302 "discovery_filter": "match_any", 00:22:32.302 "admin_cmd_passthru": { 00:22:32.302 "identify_ctrlr": false 00:22:32.302 } 00:22:32.302 } 00:22:32.302 }, 00:22:32.302 { 00:22:32.302 "method": "nvmf_set_max_subsystems", 00:22:32.302 "params": { 00:22:32.302 "max_subsystems": 1024 00:22:32.302 } 00:22:32.302 }, 00:22:32.302 { 00:22:32.302 "method": "nvmf_set_crdt", 00:22:32.302 "params": { 00:22:32.302 "crdt1": 0, 00:22:32.302 "crdt2": 0, 00:22:32.302 "crdt3": 0 00:22:32.302 } 00:22:32.302 }, 00:22:32.302 { 00:22:32.302 "method": 
"nvmf_create_transport", 00:22:32.302 "params": { 00:22:32.302 "trtype": "TCP", 00:22:32.302 "max_queue_depth": 128, 00:22:32.302 "max_io_qpairs_per_ctrlr": 127, 00:22:32.302 "in_capsule_data_size": 4096, 00:22:32.302 "max_io_size": 131072, 00:22:32.302 "io_unit_size": 131072, 00:22:32.302 "max_aq_depth": 128, 00:22:32.302 "num_shared_buffers": 511, 00:22:32.302 "buf_cache_size": 4294967295, 00:22:32.302 "dif_insert_or_strip": false, 00:22:32.302 "zcopy": false, 00:22:32.302 "c2h_success": false, 00:22:32.302 "sock_priority": 0, 00:22:32.302 "abort_timeout_sec": 1, 00:22:32.302 "ack_timeout": 0, 00:22:32.302 "data_wr_pool_size": 0 00:22:32.302 } 00:22:32.302 }, 00:22:32.302 { 00:22:32.302 "method": "nvmf_create_subsystem", 00:22:32.302 "params": { 00:22:32.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.302 "allow_any_host": false, 00:22:32.302 "serial_number": "00000000000000000000", 00:22:32.302 "model_number": "SPDK bdev Controller", 00:22:32.302 "max_namespaces": 32, 00:22:32.302 "min_cntlid": 1, 00:22:32.302 "max_cntlid": 65519, 00:22:32.302 "ana_reporting": false 00:22:32.302 } 00:22:32.302 }, 00:22:32.302 { 00:22:32.302 "method": "nvmf_subsystem_add_host", 00:22:32.302 "params": { 00:22:32.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.302 "host": "nqn.2016-06.io.spdk:host1", 00:22:32.302 "psk": "key0" 00:22:32.302 } 00:22:32.302 }, 00:22:32.302 { 00:22:32.302 "method": "nvmf_subsystem_add_ns", 00:22:32.302 "params": { 00:22:32.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.302 "namespace": { 00:22:32.302 "nsid": 1, 00:22:32.302 "bdev_name": "malloc0", 00:22:32.302 "nguid": "9500E5A1A8874052B8C4C880104719F2", 00:22:32.302 "uuid": "9500e5a1-a887-4052-b8c4-c880104719f2", 00:22:32.302 "no_auto_visible": false 00:22:32.302 } 00:22:32.302 } 00:22:32.302 }, 00:22:32.302 { 00:22:32.302 "method": "nvmf_subsystem_add_listener", 00:22:32.302 "params": { 00:22:32.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.302 "listen_address": { 00:22:32.302 "trtype": "TCP", 00:22:32.302 "adrfam": "IPv4", 00:22:32.302 "traddr": "10.0.0.2", 00:22:32.302 "trsvcid": "4420" 00:22:32.302 }, 00:22:32.302 "secure_channel": true 00:22:32.302 } 00:22:32.302 } 00:22:32.302 ] 00:22:32.302 } 00:22:32.302 ] 00:22:32.302 }' 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73157 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73157 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 73157 ']' 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:32.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:32.302 09:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.302 [2024-05-15 09:15:44.635887] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:22:32.302 [2024-05-15 09:15:44.635966] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.560 [2024-05-15 09:15:44.773363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.560 [2024-05-15 09:15:44.875314] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.560 [2024-05-15 09:15:44.875371] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.560 [2024-05-15 09:15:44.875383] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.560 [2024-05-15 09:15:44.875394] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.560 [2024-05-15 09:15:44.875402] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.560 [2024-05-15 09:15:44.875491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.817 [2024-05-15 09:15:45.090304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.817 [2024-05-15 09:15:45.122229] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:32.817 [2024-05-15 09:15:45.122306] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.817 [2024-05-15 09:15:45.122465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=73189 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 73189 /var/tmp/bdevperf.sock 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 73189 ']' 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
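The -c /dev/fd/62 argument above (and the matching descriptor on the bdevperf side below) comes from bash process substitution: the JSON captured by save_config is echoed back into the restarted application, so the target comes up with the keyring, subsystem, namespace and secure-channel listener already configured. A standalone equivalent of the target restart, with the network namespace and paths from this run:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    nvmfpid=$!
    # wait for /var/tmp/spdk.sock before issuing further RPCs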
00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:33.435 09:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:33.435 "subsystems": [ 00:22:33.435 { 00:22:33.435 "subsystem": "keyring", 00:22:33.435 "config": [ 00:22:33.435 { 00:22:33.435 "method": "keyring_file_add_key", 00:22:33.435 "params": { 00:22:33.435 "name": "key0", 00:22:33.435 "path": "/tmp/tmp.6Yv0ygeGUW" 00:22:33.435 } 00:22:33.435 } 00:22:33.435 ] 00:22:33.435 }, 00:22:33.435 { 00:22:33.435 "subsystem": "iobuf", 00:22:33.435 "config": [ 00:22:33.435 { 00:22:33.435 "method": "iobuf_set_options", 00:22:33.435 "params": { 00:22:33.435 "small_pool_count": 8192, 00:22:33.435 "large_pool_count": 1024, 00:22:33.435 "small_bufsize": 8192, 00:22:33.435 "large_bufsize": 135168 00:22:33.435 } 00:22:33.435 } 00:22:33.435 ] 00:22:33.435 }, 00:22:33.435 { 00:22:33.435 "subsystem": "sock", 00:22:33.435 "config": [ 00:22:33.435 { 00:22:33.435 "method": "sock_impl_set_options", 00:22:33.435 "params": { 00:22:33.435 "impl_name": "uring", 00:22:33.435 "recv_buf_size": 2097152, 00:22:33.435 "send_buf_size": 2097152, 00:22:33.435 "enable_recv_pipe": true, 00:22:33.435 "enable_quickack": false, 00:22:33.435 "enable_placement_id": 0, 00:22:33.435 "enable_zerocopy_send_server": false, 00:22:33.435 "enable_zerocopy_send_client": false, 00:22:33.435 "zerocopy_threshold": 0, 00:22:33.435 "tls_version": 0, 00:22:33.435 "enable_ktls": false 00:22:33.435 } 00:22:33.435 }, 00:22:33.435 { 00:22:33.435 "method": "sock_impl_set_options", 00:22:33.435 "params": { 00:22:33.435 "impl_name": "posix", 00:22:33.435 "recv_buf_size": 2097152, 00:22:33.435 "send_buf_size": 2097152, 00:22:33.435 "enable_recv_pipe": true, 00:22:33.435 "enable_quickack": false, 00:22:33.435 "enable_placement_id": 0, 00:22:33.435 "enable_zerocopy_send_server": true, 00:22:33.435 "enable_zerocopy_send_client": false, 00:22:33.435 "zerocopy_threshold": 0, 00:22:33.435 "tls_version": 0, 00:22:33.435 "enable_ktls": false 00:22:33.435 } 00:22:33.435 }, 00:22:33.435 { 00:22:33.435 "method": "sock_impl_set_options", 00:22:33.435 "params": { 00:22:33.435 "impl_name": "ssl", 00:22:33.435 "recv_buf_size": 4096, 00:22:33.435 "send_buf_size": 4096, 00:22:33.435 "enable_recv_pipe": true, 00:22:33.435 "enable_quickack": false, 00:22:33.435 "enable_placement_id": 0, 00:22:33.435 "enable_zerocopy_send_server": true, 00:22:33.435 "enable_zerocopy_send_client": false, 00:22:33.435 "zerocopy_threshold": 0, 00:22:33.435 "tls_version": 0, 00:22:33.435 "enable_ktls": false 00:22:33.435 } 00:22:33.435 } 00:22:33.435 ] 00:22:33.435 }, 00:22:33.435 { 00:22:33.435 "subsystem": "vmd", 00:22:33.435 "config": [] 00:22:33.435 }, 00:22:33.435 { 00:22:33.435 "subsystem": "accel", 00:22:33.435 "config": [ 00:22:33.435 { 00:22:33.435 "method": "accel_set_options", 00:22:33.435 "params": { 00:22:33.435 "small_cache_size": 128, 00:22:33.435 "large_cache_size": 16, 00:22:33.435 "task_count": 2048, 00:22:33.435 "sequence_count": 2048, 00:22:33.435 "buf_count": 2048 00:22:33.435 } 00:22:33.435 } 00:22:33.435 ] 00:22:33.435 }, 00:22:33.435 { 00:22:33.435 "subsystem": "bdev", 00:22:33.435 "config": [ 00:22:33.435 { 00:22:33.435 "method": "bdev_set_options", 00:22:33.435 "params": { 00:22:33.435 
"bdev_io_pool_size": 65535, 00:22:33.435 "bdev_io_cache_size": 256, 00:22:33.435 "bdev_auto_examine": true, 00:22:33.435 "iobuf_small_cache_size": 128, 00:22:33.435 "iobuf_large_cache_size": 16 00:22:33.435 } 00:22:33.435 }, 00:22:33.435 { 00:22:33.435 "method": "bdev_raid_set_options", 00:22:33.435 "params": { 00:22:33.435 "process_window_size_kb": 1024 00:22:33.435 } 00:22:33.435 }, 00:22:33.435 { 00:22:33.436 "method": "bdev_iscsi_set_options", 00:22:33.436 "params": { 00:22:33.436 "timeout_sec": 30 00:22:33.436 } 00:22:33.436 }, 00:22:33.436 { 00:22:33.436 "method": "bdev_nvme_set_options", 00:22:33.436 "params": { 00:22:33.436 "action_on_timeout": "none", 00:22:33.436 "timeout_us": 0, 00:22:33.436 "timeout_admin_us": 0, 00:22:33.436 "keep_alive_timeout_ms": 10000, 00:22:33.436 "arbitration_burst": 0, 00:22:33.436 "low_priority_weight": 0, 00:22:33.436 "medium_priority_weight": 0, 00:22:33.436 "high_priority_weight": 0, 00:22:33.436 "nvme_adminq_poll_period_us": 10000, 00:22:33.436 "nvme_ioq_poll_period_us": 0, 00:22:33.436 "io_queue_requests": 512, 00:22:33.436 "delay_cmd_submit": true, 00:22:33.436 "transport_retry_count": 4, 00:22:33.436 "bdev_retry_count": 3, 00:22:33.436 "transport_ack_timeout": 0, 00:22:33.436 "ctrlr_loss_timeout_sec": 0, 00:22:33.436 "reconnect_delay_sec": 0, 00:22:33.436 "fast_io_fail_timeout_sec": 0, 00:22:33.436 "disable_auto_failback": false, 00:22:33.436 "generate_uuids": false, 00:22:33.436 "transport_tos": 0, 00:22:33.436 "nvme_error_stat": false, 00:22:33.436 "rdma_srq_size": 0, 00:22:33.436 "io_path_stat": false, 00:22:33.436 "allow_accel_sequence": false, 00:22:33.436 "rdma_max_cq_size": 0, 00:22:33.436 "rdma_cm_event_timeout_ms": 0, 00:22:33.436 "dhchap_digests": [ 00:22:33.436 "sha256", 00:22:33.436 "sha384", 00:22:33.436 "sha512" 00:22:33.436 ], 00:22:33.436 "dhchap_dhgroups": [ 00:22:33.436 "null", 00:22:33.436 "ffdhe2048", 00:22:33.436 "ffdhe3072", 00:22:33.436 "ffdhe4096", 00:22:33.436 "ffdhe6144", 00:22:33.436 "ffdhe8192" 00:22:33.436 ] 00:22:33.436 } 00:22:33.436 }, 00:22:33.436 { 00:22:33.436 "method": "bdev_nvme_attach_controller", 00:22:33.436 "params": { 00:22:33.436 "name": "nvme0", 00:22:33.436 "trtype": "TCP", 00:22:33.436 "adrfam": "IPv4", 00:22:33.436 "traddr": "10.0.0.2", 00:22:33.436 "trsvcid": "4420", 00:22:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.436 "prchk_reftag": false, 00:22:33.436 "prchk_guard": false, 00:22:33.436 "ctrlr_loss_timeout_sec": 0, 00:22:33.436 "reconnect_delay_sec": 0, 00:22:33.436 "fast_io_fail_timeout_sec": 0, 00:22:33.436 "psk": "key0", 00:22:33.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.436 "hdgst": false, 00:22:33.436 "ddgst": false 00:22:33.436 } 00:22:33.436 }, 00:22:33.436 { 00:22:33.436 "method": "bdev_nvme_set_hotplug", 00:22:33.436 "params": { 00:22:33.436 "period_us": 100000, 00:22:33.436 "enable": false 00:22:33.436 } 00:22:33.436 }, 00:22:33.436 { 00:22:33.436 "method": "bdev_enable_histogram", 00:22:33.436 "params": { 00:22:33.436 "name": "nvme0n1", 00:22:33.436 "enable": true 00:22:33.436 } 00:22:33.436 }, 00:22:33.436 { 00:22:33.436 "method": "bdev_wait_for_examine" 00:22:33.436 } 00:22:33.436 ] 00:22:33.436 }, 00:22:33.436 { 00:22:33.436 "subsystem": "nbd", 00:22:33.436 "config": [] 00:22:33.436 } 00:22:33.436 ] 00:22:33.436 }' 00:22:33.436 [2024-05-15 09:15:45.730624] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:22:33.436 [2024-05-15 09:15:45.731029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73189 ] 00:22:33.436 [2024-05-15 09:15:45.875861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.694 [2024-05-15 09:15:45.981234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.952 [2024-05-15 09:15:46.143256] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.519 09:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:34.519 09:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:34.519 09:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:34.519 09:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:34.777 09:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.777 09:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:34.777 Running I/O for 1 seconds... 00:22:36.152 00:22:36.152 Latency(us) 00:22:36.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.152 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:36.152 Verification LBA range: start 0x0 length 0x2000 00:22:36.152 nvme0n1 : 1.01 5628.37 21.99 0.00 0.00 22576.51 4025.78 17476.27 00:22:36.152 =================================================================================================================== 00:22:36.152 Total : 5628.37 21.99 0.00 0.00 22576.51 4025.78 17476.27 00:22:36.152 0 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:36.152 nvmf_trace.0 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 73189 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 73189 ']' 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 73189 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73189 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:36.152 killing process with pid 73189 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73189' 00:22:36.152 Received shutdown signal, test time was about 1.000000 seconds 00:22:36.152 00:22:36.152 Latency(us) 00:22:36.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.152 =================================================================================================================== 00:22:36.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 73189 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 73189 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.152 rmmod nvme_tcp 00:22:36.152 rmmod nvme_fabrics 00:22:36.152 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73157 ']' 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73157 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 73157 ']' 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 73157 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73157 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:36.411 killing process with pid 73157 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73157' 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 73157 00:22:36.411 [2024-05-15 09:15:48.630896] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:36.411 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 73157 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zwj9DiJZQl /tmp/tmp.HHYghtMkqL /tmp/tmp.6Yv0ygeGUW 00:22:36.670 00:22:36.670 real 1m26.643s 00:22:36.670 user 2m21.643s 00:22:36.670 sys 0m26.073s 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:36.670 ************************************ 00:22:36.670 END TEST nvmf_tls 00:22:36.670 ************************************ 00:22:36.670 09:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 09:15:48 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:36.670 09:15:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:36.670 09:15:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:36.670 09:15:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 ************************************ 00:22:36.670 START TEST nvmf_fips 00:22:36.670 ************************************ 00:22:36.670 09:15:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:36.670 * Looking for test storage... 
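The teardown traced above repeats at the end of every sub-test in this run: archive any trace file left in /dev/shm, then kill the SPDK app by PID once the PID has been confirmed to still name a reactor process. A rough bash sketch of that pattern, reconstructed only from the process_shm and killprocess traces visible in this log (the real helper bodies live in common/autotest_common.sh and are not shown here, so the output directory and the handling of a 'sudo' process name are assumptions):

    # Sketch reconstructed from the trace; $output_dir is assumed, not taken from this log.
    process_shm() {
        local id=$1 f
        for f in $(find /dev/shm -name "*.$id" -printf '%f\n'); do
            tar -C /dev/shm/ -cvzf "$output_dir/${f}_shm.tar.gz" "$f"
        done
    }

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1               # assumed: the trace only shows the comparison against 'sudo'
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                    # wait succeeds because the app was launched from this shell
    }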
00:22:36.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:36.670 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:36.671 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:36.671 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.671 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:36.671 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:36.671 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:36.929 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:22:36.930 Error setting digest 00:22:36.930 00A2F8886C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:36.930 00A2F8886C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:36.930 Cannot find device "nvmf_tgt_br" 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:36.930 Cannot find device "nvmf_tgt_br2" 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:36.930 Cannot find device "nvmf_tgt_br" 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:36.930 Cannot find device "nvmf_tgt_br2" 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:22:36.930 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:37.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:37.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:37.189 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:37.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:37.447 00:22:37.447 --- 10.0.0.2 ping statistics --- 00:22:37.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.447 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:37.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:37.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:22:37.447 00:22:37.447 --- 10.0.0.3 ping statistics --- 00:22:37.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.447 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:37.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:37.447 00:22:37.447 --- 10.0.0.1 ping statistics --- 00:22:37.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.447 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73455 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73455 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 73455 ']' 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:37.447 09:15:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:37.447 [2024-05-15 09:15:49.772427] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
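Once the three pings above succeed, the test network for the FIPS run is in place. Condensed from the nvmf_veth_init trace, the topology amounts to the commands below (interface names and addresses exactly as in this run; this is a summary of the trace, not a replacement for the helper in nvmf/common.sh, and the 'ip link set ... up' calls are left out for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target interfaces move into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br       # bridge all host-side peer ends together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT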
00:22:37.447 [2024-05-15 09:15:49.772818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.785 [2024-05-15 09:15:49.920566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.785 [2024-05-15 09:15:50.043917] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.785 [2024-05-15 09:15:50.044271] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.785 [2024-05-15 09:15:50.044465] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.785 [2024-05-15 09:15:50.044698] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.785 [2024-05-15 09:15:50.044863] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.785 [2024-05-15 09:15:50.044983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:38.356 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:38.616 [2024-05-15 09:15:50.862206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.616 [2024-05-15 09:15:50.878139] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:38.616 [2024-05-15 09:15:50.878450] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:38.616 [2024-05-15 09:15:50.878751] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.616 [2024-05-15 09:15:50.907488] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:38.616 malloc0 00:22:38.616 09:15:50 
nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73489 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73489 /var/tmp/bdevperf.sock 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 73489 ']' 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:38.616 09:15:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.616 [2024-05-15 09:15:51.002680] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:38.616 [2024-05-15 09:15:51.003011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73489 ] 00:22:38.875 [2024-05-15 09:15:51.139319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.875 [2024-05-15 09:15:51.240074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.813 09:15:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:39.813 09:15:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:22:39.813 09:15:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:39.813 [2024-05-15 09:15:52.214087] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.813 [2024-05-15 09:15:52.214422] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:40.070 TLSTESTn1 00:22:40.070 09:15:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.070 Running I/O for 10 seconds... 
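Stripped of the xtrace noise, the data-path part of this FIPS test reduces to writing the interop PSK to a file and pointing both bdevperf and the attach RPC at it. The commands below are taken from the trace of this run, with /home/vagrant/spdk_repo/spdk abbreviated to $SPDK purely for readability (that variable does not appear in the log); the target-side transport, TLS listener and subsystem are created earlier by setup_nvmf_tgt_conf and are only partly visible above, and launching bdevperf in the background is implied rather than shown:

    echo -n "$key" > "$SPDK/test/nvmf/fips/key.txt"    # $key is the NVMeTLSkey-1:01:... value shown above
    chmod 0600 "$SPDK/test/nvmf/fips/key.txt"

    # bdevperf on core 2 (-m 0x4), waiting (-z) for an attach over its private RPC socket
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # attach to the TLS listener on 10.0.0.2:4420 using the PSK file, then start the 10-second verify workload
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$SPDK/test/nvmf/fips/key.txt"
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests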
00:22:50.090 00:22:50.090 Latency(us) 00:22:50.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.090 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.090 Verification LBA range: start 0x0 length 0x2000 00:22:50.090 TLSTESTn1 : 10.01 5137.12 20.07 0.00 0.00 24878.92 2418.59 22219.82 00:22:50.090 =================================================================================================================== 00:22:50.090 Total : 5137.12 20.07 0.00 0.00 24878.92 2418.59 22219.82 00:22:50.090 0 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:22:50.090 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:50.090 nvmf_trace.0 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73489 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 73489 ']' 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 73489 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73489 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:50.348 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:50.348 killing process with pid 73489 00:22:50.348 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.348 00:22:50.348 Latency(us) 00:22:50.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.349 =================================================================================================================== 00:22:50.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.349 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73489' 00:22:50.349 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 73489 00:22:50.349 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 73489 00:22:50.349 [2024-05-15 09:16:02.574740] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
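The nvmftestfini trace that follows unwinds the setup in reverse order: flush I/O, unload the initiator kernel modules, kill the target application, then tear the namespace and addresses down. Roughly, as reconstructed from the commands in the trace (the retry/exit conditions and the body of _remove_spdk_ns are not shown in this log, so those details are assumptions):

    sync
    set +e                                   # module unload is allowed to fail and be retried
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # break condition assumed
    done
    set -e
    killprocess "$nvmfpid"                   # 73455 in this run
    ip netns delete nvmf_tgt_ns_spdk         # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if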
00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.607 rmmod nvme_tcp 00:22:50.607 rmmod nvme_fabrics 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73455 ']' 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73455 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 73455 ']' 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 73455 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73455 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:50.607 killing process with pid 73455 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73455' 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 73455 00:22:50.607 [2024-05-15 09:16:02.903738] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:50.607 [2024-05-15 09:16:02.903783] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:50.607 09:16:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 73455 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:22:50.868 00:22:50.868 real 0m14.222s 00:22:50.868 user 0m20.551s 00:22:50.868 sys 0m5.214s 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:50.868 09:16:03 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.868 ************************************ 00:22:50.868 END TEST nvmf_fips 00:22:50.868 ************************************ 00:22:50.868 09:16:03 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:50.868 09:16:03 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:22:50.868 09:16:03 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:50.868 09:16:03 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:50.868 09:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.868 09:16:03 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:50.868 09:16:03 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:50.868 09:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.868 09:16:03 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 1 -eq 0 ]] 00:22:50.868 09:16:03 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:50.868 09:16:03 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:50.868 09:16:03 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:50.868 09:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.868 ************************************ 00:22:50.868 START TEST nvmf_identify 00:22:50.868 ************************************ 00:22:50.868 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:51.131 * Looking for test storage... 00:22:51.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:51.131 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.131 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:51.131 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.132 09:16:03 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:51.132 Cannot find device "nvmf_tgt_br" 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.132 Cannot find device 
"nvmf_tgt_br2" 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:51.132 Cannot find device "nvmf_tgt_br" 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:51.132 Cannot find device "nvmf_tgt_br2" 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:22:51.132 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:51.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:22:51.390 00:22:51.390 --- 10.0.0.2 ping statistics --- 00:22:51.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.390 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:51.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:51.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:22:51.390 00:22:51.390 --- 10.0.0.3 ping statistics --- 00:22:51.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.390 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:51.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:22:51.390 00:22:51.390 --- 10.0.0.1 ping statistics --- 00:22:51.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.390 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.390 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73839 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73839 00:22:51.391 
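For reference, the ip/iptables sequence traced above can be reproduced by hand. The following is a minimal standalone sketch of the topology nvmf_veth_init builds and verifies (interface names, addresses and the 4420 port are taken from the trace; the loop over the root-namespace interfaces is only a convenience of the sketch):

# Target-side interfaces live in a dedicated netns, the initiator side stays in the
# root namespace, and a bridge ties the root-namespace ends of the veth pairs together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP traffic on 4420
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                      # let the bridge forward
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target -> initiator

The three ping checks at the end match the ones in the trace: once the bridge is populated, 10.0.0.1 in the root namespace and 10.0.0.2/10.0.0.3 inside the namespace can reach each other over plain TCP.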
09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 73839 ']' 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:51.391 09:16:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.648 [2024-05-15 09:16:03.869862] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:51.648 [2024-05-15 09:16:03.869960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.648 [2024-05-15 09:16:04.014802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.905 [2024-05-15 09:16:04.145531] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.905 [2024-05-15 09:16:04.145601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.905 [2024-05-15 09:16:04.145616] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.905 [2024-05-15 09:16:04.145629] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.905 [2024-05-15 09:16:04.145640] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
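With the namespace in place, the target is launched inside nvmf_tgt_ns_spdk and the harness blocks until the JSON-RPC socket answers (the waitforlisten call above). A simplified stand-in for that wait, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a liveness probe rather than copying waitforlisten itself, looks like this:

# Command line exactly as traced above; the polling loop approximates waitforlisten.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done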
00:22:51.906 [2024-05-15 09:16:04.145740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.906 [2024-05-15 09:16:04.146044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.906 [2024-05-15 09:16:04.146505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.906 [2024-05-15 09:16:04.146522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.471 [2024-05-15 09:16:04.868286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:52.471 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.728 Malloc0 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.728 [2024-05-15 09:16:04.974792] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:52.728 [2024-05-15 09:16:04.975228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.728 09:16:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.728 [ 00:22:52.728 { 00:22:52.728 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:52.728 "subtype": "Discovery", 00:22:52.728 "listen_addresses": [ 00:22:52.728 { 00:22:52.728 "trtype": "TCP", 00:22:52.728 "adrfam": "IPv4", 00:22:52.728 "traddr": "10.0.0.2", 00:22:52.728 "trsvcid": "4420" 00:22:52.728 } 00:22:52.728 ], 00:22:52.728 "allow_any_host": true, 00:22:52.728 "hosts": [] 00:22:52.728 }, 00:22:52.728 { 00:22:52.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.728 "subtype": "NVMe", 00:22:52.728 "listen_addresses": [ 00:22:52.728 { 00:22:52.728 "trtype": "TCP", 00:22:52.728 "adrfam": "IPv4", 00:22:52.728 "traddr": "10.0.0.2", 00:22:52.728 "trsvcid": "4420" 00:22:52.728 } 00:22:52.728 ], 00:22:52.728 "allow_any_host": true, 00:22:52.728 "hosts": [], 00:22:52.728 "serial_number": "SPDK00000000000001", 00:22:52.728 "model_number": "SPDK bdev Controller", 00:22:52.728 "max_namespaces": 32, 00:22:52.728 "min_cntlid": 1, 00:22:52.728 "max_cntlid": 65519, 00:22:52.728 "namespaces": [ 00:22:52.728 { 00:22:52.728 "nsid": 1, 00:22:52.728 "bdev_name": "Malloc0", 00:22:52.728 "name": "Malloc0", 00:22:52.728 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:52.728 "eui64": "ABCDEF0123456789", 00:22:52.728 "uuid": "fcfa5a3b-d9ba-4212-82a2-6eb02f0239e0" 00:22:52.728 } 00:22:52.728 ] 00:22:52.728 } 00:22:52.728 ] 00:22:52.728 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.728 09:16:05 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:52.728 [2024-05-15 09:16:05.026854] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
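The rpc_cmd calls above are thin wrappers around scripts/rpc.py, so the configuration the test just applied (TCP transport, a 64 MiB/512-byte-block Malloc0 bdev, subsystem cnode1 with one namespace, plus data and discovery listeners on 10.0.0.2:4420) can be written out explicitly as below. The rpc() helper and the default socket path are conveniences of this sketch; the spdk_nvme_identify invocation at the end is the exact command whose output follows.

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192          # same transport options as the test
rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_get_subsystems                              # prints the JSON shown above

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all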
00:22:52.728 [2024-05-15 09:16:05.026907] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73874 ] 00:22:52.994 [2024-05-15 09:16:05.160906] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:52.994 [2024-05-15 09:16:05.173791] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:52.994 [2024-05-15 09:16:05.173884] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:52.994 [2024-05-15 09:16:05.173933] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:52.994 [2024-05-15 09:16:05.174003] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:22:52.994 [2024-05-15 09:16:05.174200] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:52.994 [2024-05-15 09:16:05.174355] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1919280 0 00:22:52.994 [2024-05-15 09:16:05.200599] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:52.994 [2024-05-15 09:16:05.200807] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:52.994 [2024-05-15 09:16:05.200888] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:52.994 [2024-05-15 09:16:05.200964] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:52.994 [2024-05-15 09:16:05.201059] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.201107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.201144] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.994 [2024-05-15 09:16:05.201219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:52.994 [2024-05-15 09:16:05.201299] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.994 [2024-05-15 09:16:05.217080] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.994 [2024-05-15 09:16:05.217289] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.994 [2024-05-15 09:16:05.217363] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.217437] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.994 [2024-05-15 09:16:05.217539] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:52.994 [2024-05-15 09:16:05.217650] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:52.994 [2024-05-15 09:16:05.217741] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:52.994 [2024-05-15 09:16:05.217864] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.217924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.994 [2024-05-15 
09:16:05.217957] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.994 [2024-05-15 09:16:05.218033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.994 [2024-05-15 09:16:05.218153] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.994 [2024-05-15 09:16:05.218294] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.994 [2024-05-15 09:16:05.218326] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.994 [2024-05-15 09:16:05.218482] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.218520] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.994 [2024-05-15 09:16:05.218679] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:52.994 [2024-05-15 09:16:05.218736] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:52.994 [2024-05-15 09:16:05.218828] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.218861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.218888] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.994 [2024-05-15 09:16:05.218921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.994 [2024-05-15 09:16:05.219020] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.994 [2024-05-15 09:16:05.219083] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.994 [2024-05-15 09:16:05.219154] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.994 [2024-05-15 09:16:05.219187] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.219215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.994 [2024-05-15 09:16:05.219314] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:52.994 [2024-05-15 09:16:05.219369] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:52.994 [2024-05-15 09:16:05.219457] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.219524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.219568] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.994 [2024-05-15 09:16:05.219629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.994 [2024-05-15 09:16:05.219768] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.994 [2024-05-15 09:16:05.219841] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.994 [2024-05-15 09:16:05.219876] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.994 [2024-05-15 09:16:05.219935] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.219968] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.994 [2024-05-15 09:16:05.220112] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:52.994 [2024-05-15 09:16:05.220172] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.994 [2024-05-15 09:16:05.220200] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.220263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.220300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.995 [2024-05-15 09:16:05.220367] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.995 [2024-05-15 09:16:05.220445] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.995 [2024-05-15 09:16:05.220475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.995 [2024-05-15 09:16:05.220536] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.220584] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.995 [2024-05-15 09:16:05.220655] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:52.995 [2024-05-15 09:16:05.220798] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:52.995 [2024-05-15 09:16:05.220855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:52.995 [2024-05-15 09:16:05.221048] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:52.995 [2024-05-15 09:16:05.221155] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:52.995 [2024-05-15 09:16:05.221249] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.221283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.221310] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.221370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.995 [2024-05-15 09:16:05.221444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.995 [2024-05-15 09:16:05.221509] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.995 [2024-05-15 09:16:05.221554] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.995 [2024-05-15 09:16:05.221586] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:52.995 [2024-05-15 09:16:05.221613] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.995 [2024-05-15 09:16:05.221704] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:52.995 [2024-05-15 09:16:05.221812] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.221860] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.221886] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.221917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.995 [2024-05-15 09:16:05.221981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.995 [2024-05-15 09:16:05.222053] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.995 [2024-05-15 09:16:05.222102] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.995 [2024-05-15 09:16:05.222129] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.222155] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.995 [2024-05-15 09:16:05.222205] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:52.995 [2024-05-15 09:16:05.222344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:52.995 [2024-05-15 09:16:05.222396] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:52.995 [2024-05-15 09:16:05.222459] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:52.995 [2024-05-15 09:16:05.222598] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.222627] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.222658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.995 [2024-05-15 09:16:05.222721] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.995 [2024-05-15 09:16:05.222852] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.995 [2024-05-15 09:16:05.222883] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.995 [2024-05-15 09:16:05.222910] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.222937] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919280): datao=0, datal=4096, cccid=0 00:22:52.995 [2024-05-15 09:16:05.223039] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1961950) on tqpair(0x1919280): expected_datao=0, payload_size=4096 00:22:52.995 [2024-05-15 09:16:05.223092] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:52.995 [2024-05-15 09:16:05.223125] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.223184] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.223217] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.995 [2024-05-15 09:16:05.223247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.995 [2024-05-15 09:16:05.223308] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.223376] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.995 [2024-05-15 09:16:05.223472] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:52.995 [2024-05-15 09:16:05.223582] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:52.995 [2024-05-15 09:16:05.223638] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:52.995 [2024-05-15 09:16:05.223798] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:52.995 [2024-05-15 09:16:05.223888] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:52.995 [2024-05-15 09:16:05.223982] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:52.995 [2024-05-15 09:16:05.224070] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:52.995 [2024-05-15 09:16:05.224135] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.224211] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.224244] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.224276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.995 [2024-05-15 09:16:05.224344] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.995 [2024-05-15 09:16:05.224422] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.995 [2024-05-15 09:16:05.224472] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.995 [2024-05-15 09:16:05.224499] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.224526] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961950) on tqpair=0x1919280 00:22:52.995 [2024-05-15 09:16:05.224624] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.224792] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.224878] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.224946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.995 [2024-05-15 09:16:05.225045] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.225111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.225143] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.225202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.995 [2024-05-15 09:16:05.225309] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.225368] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.225400] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.225459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.995 [2024-05-15 09:16:05.225567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.225627] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.225659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.225788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.995 [2024-05-15 09:16:05.225902] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:52.995 [2024-05-15 09:16:05.226007] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:52.995 [2024-05-15 09:16:05.226069] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.226110] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919280) 00:22:52.995 [2024-05-15 09:16:05.226185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.995 [2024-05-15 09:16:05.226260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961950, cid 0, qid 0 00:22:52.995 [2024-05-15 09:16:05.226321] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961ab0, cid 1, qid 0 00:22:52.995 [2024-05-15 09:16:05.226367] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961c10, cid 2, qid 0 00:22:52.995 [2024-05-15 09:16:05.226395] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:52.995 [2024-05-15 09:16:05.226423] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961ed0, cid 4, qid 0 00:22:52.995 [2024-05-15 09:16:05.226456] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.995 [2024-05-15 09:16:05.226539] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.995 [2024-05-15 09:16:05.226615] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.995 [2024-05-15 09:16:05.226683] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961ed0) on tqpair=0x1919280 00:22:52.995 [2024-05-15 09:16:05.226740] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:52.995 [2024-05-15 09:16:05.226823] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:52.995 [2024-05-15 09:16:05.226885] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.226952] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919280) 00:22:52.996 [2024-05-15 09:16:05.226989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.996 [2024-05-15 09:16:05.227056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961ed0, cid 4, qid 0 00:22:52.996 [2024-05-15 09:16:05.227135] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.996 [2024-05-15 09:16:05.227165] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.996 [2024-05-15 09:16:05.227191] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.227252] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919280): datao=0, datal=4096, cccid=4 00:22:52.996 [2024-05-15 09:16:05.227308] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1961ed0) on tqpair(0x1919280): expected_datao=0, payload_size=4096 00:22:52.996 [2024-05-15 09:16:05.227356] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.227422] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.227450] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.227483] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.996 [2024-05-15 09:16:05.227556] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.996 [2024-05-15 09:16:05.227643] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.227677] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961ed0) on tqpair=0x1919280 00:22:52.996 [2024-05-15 09:16:05.227824] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:52.996 [2024-05-15 09:16:05.227944] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.228052] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919280) 00:22:52.996 [2024-05-15 09:16:05.228091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.996 [2024-05-15 09:16:05.228186] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.228215] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.228278] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1919280) 00:22:52.996 [2024-05-15 09:16:05.228314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.996 [2024-05-15 09:16:05.228416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1961ed0, cid 4, qid 0 00:22:52.996 [2024-05-15 09:16:05.228452] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962030, cid 5, qid 0 00:22:52.996 [2024-05-15 09:16:05.228589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.996 [2024-05-15 09:16:05.228625] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.996 [2024-05-15 09:16:05.228652] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.228716] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919280): datao=0, datal=1024, cccid=4 00:22:52.996 [2024-05-15 09:16:05.228817] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1961ed0) on tqpair(0x1919280): expected_datao=0, payload_size=1024 00:22:52.996 [2024-05-15 09:16:05.228897] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.228930] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.228987] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.229058] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.996 [2024-05-15 09:16:05.229093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.996 [2024-05-15 09:16:05.229150] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.229183] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1962030) on tqpair=0x1919280 00:22:52.996 [2024-05-15 09:16:05.229336] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.996 [2024-05-15 09:16:05.229370] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.996 [2024-05-15 09:16:05.229440] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.229472] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961ed0) on tqpair=0x1919280 00:22:52.996 [2024-05-15 09:16:05.229551] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.229586] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919280) 00:22:52.996 [2024-05-15 09:16:05.229668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.996 [2024-05-15 09:16:05.229739] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961ed0, cid 4, qid 0 00:22:52.996 [2024-05-15 09:16:05.229830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.996 [2024-05-15 09:16:05.229871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.996 [2024-05-15 09:16:05.229899] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.229926] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919280): datao=0, datal=3072, cccid=4 00:22:52.996 [2024-05-15 09:16:05.229974] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1961ed0) on tqpair(0x1919280): expected_datao=0, payload_size=3072 00:22:52.996 [2024-05-15 09:16:05.230078] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.230117] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.230145] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.230177] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.996 [2024-05-15 09:16:05.230232] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.996 [2024-05-15 09:16:05.230259] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.230321] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961ed0) on tqpair=0x1919280 00:22:52.996 [2024-05-15 09:16:05.230418] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.230452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919280) 00:22:52.996 [2024-05-15 09:16:05.230515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.996 [2024-05-15 09:16:05.230639] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961ed0, cid 4, qid 0 00:22:52.996 [2024-05-15 09:16:05.230764] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.996 [2024-05-15 09:16:05.230800] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.996 [2024-05-15 09:16:05.230827] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.230887] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919280): datao=0, datal=8, cccid=4 00:22:52.996 [2024-05-15 09:16:05.230941] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1961ed0) on tqpair(0x1919280): expected_datao=0, payload_size=8 00:22:52.996 [2024-05-15 09:16:05.230989] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.231018] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.231084] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.231117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.996 [2024-05-15 09:16:05.231146] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.996 [2024-05-15 09:16:05.231173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.996 [2024-05-15 09:16:05.231199] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961ed0) on tqpair=0x1919280 00:22:52.996 ===================================================== 00:22:52.996 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:52.996 ===================================================== 00:22:52.996 Controller Capabilities/Features 00:22:52.996 ================================ 00:22:52.996 Vendor ID: 0000 00:22:52.996 Subsystem Vendor ID: 0000 00:22:52.996 Serial Number: .................... 00:22:52.996 Model Number: ........................................ 
00:22:52.996 Firmware Version: 24.05 00:22:52.996 Recommended Arb Burst: 0 00:22:52.996 IEEE OUI Identifier: 00 00 00 00:22:52.996 Multi-path I/O 00:22:52.996 May have multiple subsystem ports: No 00:22:52.996 May have multiple controllers: No 00:22:52.996 Associated with SR-IOV VF: No 00:22:52.996 Max Data Transfer Size: 131072 00:22:52.996 Max Number of Namespaces: 0 00:22:52.996 Max Number of I/O Queues: 1024 00:22:52.996 NVMe Specification Version (VS): 1.3 00:22:52.996 NVMe Specification Version (Identify): 1.3 00:22:52.996 Maximum Queue Entries: 128 00:22:52.996 Contiguous Queues Required: Yes 00:22:52.996 Arbitration Mechanisms Supported 00:22:52.996 Weighted Round Robin: Not Supported 00:22:52.996 Vendor Specific: Not Supported 00:22:52.996 Reset Timeout: 15000 ms 00:22:52.996 Doorbell Stride: 4 bytes 00:22:52.996 NVM Subsystem Reset: Not Supported 00:22:52.996 Command Sets Supported 00:22:52.996 NVM Command Set: Supported 00:22:52.996 Boot Partition: Not Supported 00:22:52.997 Memory Page Size Minimum: 4096 bytes 00:22:52.997 Memory Page Size Maximum: 4096 bytes 00:22:52.997 Persistent Memory Region: Not Supported 00:22:52.997 Optional Asynchronous Events Supported 00:22:52.997 Namespace Attribute Notices: Not Supported 00:22:52.997 Firmware Activation Notices: Not Supported 00:22:52.997 ANA Change Notices: Not Supported 00:22:52.997 PLE Aggregate Log Change Notices: Not Supported 00:22:52.997 LBA Status Info Alert Notices: Not Supported 00:22:52.997 EGE Aggregate Log Change Notices: Not Supported 00:22:52.997 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.997 Zone Descriptor Change Notices: Not Supported 00:22:52.997 Discovery Log Change Notices: Supported 00:22:52.997 Controller Attributes 00:22:52.997 128-bit Host Identifier: Not Supported 00:22:52.997 Non-Operational Permissive Mode: Not Supported 00:22:52.997 NVM Sets: Not Supported 00:22:52.997 Read Recovery Levels: Not Supported 00:22:52.997 Endurance Groups: Not Supported 00:22:52.997 Predictable Latency Mode: Not Supported 00:22:52.997 Traffic Based Keep ALive: Not Supported 00:22:52.997 Namespace Granularity: Not Supported 00:22:52.997 SQ Associations: Not Supported 00:22:52.997 UUID List: Not Supported 00:22:52.997 Multi-Domain Subsystem: Not Supported 00:22:52.997 Fixed Capacity Management: Not Supported 00:22:52.997 Variable Capacity Management: Not Supported 00:22:52.997 Delete Endurance Group: Not Supported 00:22:52.997 Delete NVM Set: Not Supported 00:22:52.997 Extended LBA Formats Supported: Not Supported 00:22:52.997 Flexible Data Placement Supported: Not Supported 00:22:52.997 00:22:52.997 Controller Memory Buffer Support 00:22:52.997 ================================ 00:22:52.997 Supported: No 00:22:52.997 00:22:52.997 Persistent Memory Region Support 00:22:52.997 ================================ 00:22:52.997 Supported: No 00:22:52.997 00:22:52.997 Admin Command Set Attributes 00:22:52.997 ============================ 00:22:52.997 Security Send/Receive: Not Supported 00:22:52.997 Format NVM: Not Supported 00:22:52.997 Firmware Activate/Download: Not Supported 00:22:52.997 Namespace Management: Not Supported 00:22:52.997 Device Self-Test: Not Supported 00:22:52.997 Directives: Not Supported 00:22:52.997 NVMe-MI: Not Supported 00:22:52.997 Virtualization Management: Not Supported 00:22:52.997 Doorbell Buffer Config: Not Supported 00:22:52.997 Get LBA Status Capability: Not Supported 00:22:52.997 Command & Feature Lockdown Capability: Not Supported 00:22:52.997 Abort Command Limit: 1 00:22:52.997 Async 
Event Request Limit: 4 00:22:52.997 Number of Firmware Slots: N/A 00:22:52.997 Firmware Slot 1 Read-Only: N/A 00:22:52.997 Firmware Activation Without Reset: N/A 00:22:52.997 Multiple Update Detection Support: N/A 00:22:52.997 Firmware Update Granularity: No Information Provided 00:22:52.997 Per-Namespace SMART Log: No 00:22:52.997 Asymmetric Namespace Access Log Page: Not Supported 00:22:52.997 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:52.997 Command Effects Log Page: Not Supported 00:22:52.997 Get Log Page Extended Data: Supported 00:22:52.997 Telemetry Log Pages: Not Supported 00:22:52.997 Persistent Event Log Pages: Not Supported 00:22:52.997 Supported Log Pages Log Page: May Support 00:22:52.997 Commands Supported & Effects Log Page: Not Supported 00:22:52.997 Feature Identifiers & Effects Log Page:May Support 00:22:52.997 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.997 Data Area 4 for Telemetry Log: Not Supported 00:22:52.997 Error Log Page Entries Supported: 128 00:22:52.997 Keep Alive: Not Supported 00:22:52.997 00:22:52.997 NVM Command Set Attributes 00:22:52.997 ========================== 00:22:52.997 Submission Queue Entry Size 00:22:52.997 Max: 1 00:22:52.997 Min: 1 00:22:52.997 Completion Queue Entry Size 00:22:52.997 Max: 1 00:22:52.997 Min: 1 00:22:52.997 Number of Namespaces: 0 00:22:52.997 Compare Command: Not Supported 00:22:52.997 Write Uncorrectable Command: Not Supported 00:22:52.997 Dataset Management Command: Not Supported 00:22:52.997 Write Zeroes Command: Not Supported 00:22:52.997 Set Features Save Field: Not Supported 00:22:52.997 Reservations: Not Supported 00:22:52.997 Timestamp: Not Supported 00:22:52.997 Copy: Not Supported 00:22:52.997 Volatile Write Cache: Not Present 00:22:52.997 Atomic Write Unit (Normal): 1 00:22:52.997 Atomic Write Unit (PFail): 1 00:22:52.997 Atomic Compare & Write Unit: 1 00:22:52.997 Fused Compare & Write: Supported 00:22:52.997 Scatter-Gather List 00:22:52.997 SGL Command Set: Supported 00:22:52.997 SGL Keyed: Supported 00:22:52.997 SGL Bit Bucket Descriptor: Not Supported 00:22:52.997 SGL Metadata Pointer: Not Supported 00:22:52.997 Oversized SGL: Not Supported 00:22:52.997 SGL Metadata Address: Not Supported 00:22:52.997 SGL Offset: Supported 00:22:52.997 Transport SGL Data Block: Not Supported 00:22:52.997 Replay Protected Memory Block: Not Supported 00:22:52.997 00:22:52.997 Firmware Slot Information 00:22:52.997 ========================= 00:22:52.997 Active slot: 0 00:22:52.997 00:22:52.997 00:22:52.997 Error Log 00:22:52.997 ========= 00:22:52.997 00:22:52.997 Active Namespaces 00:22:52.997 ================= 00:22:52.997 Discovery Log Page 00:22:52.997 ================== 00:22:52.997 Generation Counter: 2 00:22:52.997 Number of Records: 2 00:22:52.997 Record Format: 0 00:22:52.997 00:22:52.997 Discovery Log Entry 0 00:22:52.997 ---------------------- 00:22:52.997 Transport Type: 3 (TCP) 00:22:52.997 Address Family: 1 (IPv4) 00:22:52.997 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:52.997 Entry Flags: 00:22:52.997 Duplicate Returned Information: 1 00:22:52.997 Explicit Persistent Connection Support for Discovery: 1 00:22:52.997 Transport Requirements: 00:22:52.997 Secure Channel: Not Required 00:22:52.997 Port ID: 0 (0x0000) 00:22:52.997 Controller ID: 65535 (0xffff) 00:22:52.997 Admin Max SQ Size: 128 00:22:52.997 Transport Service Identifier: 4420 00:22:52.997 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:52.997 Transport Address: 10.0.0.2 00:22:52.997 
Discovery Log Entry 1 00:22:52.997 ---------------------- 00:22:52.997 Transport Type: 3 (TCP) 00:22:52.997 Address Family: 1 (IPv4) 00:22:52.997 Subsystem Type: 2 (NVM Subsystem) 00:22:52.997 Entry Flags: 00:22:52.997 Duplicate Returned Information: 0 00:22:52.997 Explicit Persistent Connection Support for Discovery: 0 00:22:52.997 Transport Requirements: 00:22:52.997 Secure Channel: Not Required 00:22:52.997 Port ID: 0 (0x0000) 00:22:52.997 Controller ID: 65535 (0xffff) 00:22:52.997 Admin Max SQ Size: 128 00:22:52.997 Transport Service Identifier: 4420 00:22:52.997 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:52.997 Transport Address: 10.0.0.2 [2024-05-15 09:16:05.231578] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:52.997 [2024-05-15 09:16:05.231781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.997 [2024-05-15 09:16:05.231872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.997 [2024-05-15 09:16:05.231928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.997 [2024-05-15 09:16:05.232036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.997 [2024-05-15 09:16:05.232090] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.997 [2024-05-15 09:16:05.232117] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.997 [2024-05-15 09:16:05.232144] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:52.997 [2024-05-15 09:16:05.232219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.997 [2024-05-15 09:16:05.232294] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:52.997 [2024-05-15 09:16:05.232368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.997 [2024-05-15 09:16:05.232400] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.997 [2024-05-15 09:16:05.232427] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.997 [2024-05-15 09:16:05.232454] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:52.997 [2024-05-15 09:16:05.232593] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.997 [2024-05-15 09:16:05.232658] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.997 [2024-05-15 09:16:05.232691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:52.997 [2024-05-15 09:16:05.232759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.997 [2024-05-15 09:16:05.232836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:52.997 [2024-05-15 09:16:05.232922] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.997 [2024-05-15 09:16:05.232986] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.997 [2024-05-15 09:16:05.233018] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.997 [2024-05-15 09:16:05.233083] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:52.998 [2024-05-15 09:16:05.233139] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:52.998 [2024-05-15 09:16:05.233227] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:52.998 [2024-05-15 09:16:05.233286] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.998 [2024-05-15 09:16:05.233345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.998 [2024-05-15 09:16:05.233372] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:52.998 [2024-05-15 09:16:05.233435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.998 [2024-05-15 09:16:05.233556] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:52.998 [2024-05-15 09:16:05.233621] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.998 [2024-05-15 09:16:05.233692] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.998 [2024-05-15 09:16:05.233725] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.998 [2024-05-15 09:16:05.233752] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:52.998 [2024-05-15 09:16:05.233850] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.998 [2024-05-15 09:16:05.233879] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.998 [2024-05-15 09:16:05.233956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:52.998 [2024-05-15 09:16:05.234018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.998 [2024-05-15 09:16:05.234089] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:52.998 [2024-05-15 09:16:05.234149] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.998 [2024-05-15 09:16:05.234180] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.998 [2024-05-15 09:16:05.234207] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.998 [2024-05-15 09:16:05.234233] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:52.998 [2024-05-15 09:16:05.234334] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.998 [2024-05-15 09:16:05.234367] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.998 [2024-05-15 09:16:05.234394] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:52.998 [2024-05-15 09:16:05.234480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.998 [2024-05-15 09:16:05.234564] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:52.998 [2024-05-15 09:16:05.234627] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.998 [2024-05-15 
00:22:53.003
[2024-05-15 09:16:05.241809] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.241814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.003 [2024-05-15 09:16:05.241825] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.241830] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.241835] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.003 [2024-05-15 09:16:05.241843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.003 [2024-05-15 09:16:05.241857] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.003 [2024-05-15 09:16:05.241902] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.003 [2024-05-15 09:16:05.241909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.003 [2024-05-15 09:16:05.241914] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.241919] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.003 [2024-05-15 09:16:05.241930] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.241935] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.241940] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.003 [2024-05-15 09:16:05.241947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.003 [2024-05-15 09:16:05.241962] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.003 [2024-05-15 09:16:05.242008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.003 [2024-05-15 09:16:05.242014] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.003 [2024-05-15 09:16:05.242019] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242024] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.003 [2024-05-15 09:16:05.242035] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242040] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242045] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.003 [2024-05-15 09:16:05.242052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.003 [2024-05-15 09:16:05.242067] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.003 [2024-05-15 09:16:05.242112] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.003 [2024-05-15 09:16:05.242119] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.003 [2024-05-15 09:16:05.242123] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242128] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.003 [2024-05-15 09:16:05.242140] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242145] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242150] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.003 [2024-05-15 09:16:05.242157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.003 [2024-05-15 09:16:05.242172] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.003 [2024-05-15 09:16:05.242236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.003 [2024-05-15 09:16:05.242243] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.003 [2024-05-15 09:16:05.242247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.003 [2024-05-15 09:16:05.242264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242269] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242274] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.003 [2024-05-15 09:16:05.242281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.003 [2024-05-15 09:16:05.242297] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.003 [2024-05-15 09:16:05.242355] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.003 [2024-05-15 09:16:05.242362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.003 [2024-05-15 09:16:05.242366] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242371] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.003 [2024-05-15 09:16:05.242383] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242387] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.003 [2024-05-15 09:16:05.242392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.003 [2024-05-15 09:16:05.242400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.003 [2024-05-15 09:16:05.242415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.242478] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.242485] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.242490] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242495] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.242506] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242511] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242516] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.242523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.242538] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.242617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.242624] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.242628] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242633] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.242645] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242650] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242655] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.242662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.242678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.242732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.242743] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.242748] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242753] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.242764] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242769] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242774] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.242782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.242797] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.242861] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.242868] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.242873] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242878] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.242890] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242895] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.242899] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.242907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.242922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.242982] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.242992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.242997] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243002] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.243014] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243019] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243023] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.243031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.243047] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.243104] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.243111] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.243116] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243121] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.243132] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243138] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243143] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.243151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.243166] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.243221] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.243233] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.243238] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.243255] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243260] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243265] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.243272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:53.004 [2024-05-15 09:16:05.243288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.243343] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.243350] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.243354] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243359] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.243371] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243376] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243381] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.243388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.243403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.243468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.243476] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.243481] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243486] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.243497] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243502] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243507] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.243515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.243530] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.243613] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.243621] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.243625] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243630] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.243642] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243647] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243652] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.243659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.243675] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.243753] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.243761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.243766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243771] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.243782] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.243799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.243814] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.243875] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.243885] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.243890] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243895] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.004 [2024-05-15 09:16:05.243907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243912] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.004 [2024-05-15 09:16:05.243917] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.004 [2024-05-15 09:16:05.243924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.004 [2024-05-15 09:16:05.243940] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.004 [2024-05-15 09:16:05.244009] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.004 [2024-05-15 09:16:05.244020] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.004 [2024-05-15 09:16:05.244025] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244030] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.244041] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244051] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.244059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.244074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.244132] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.244139] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 
[2024-05-15 09:16:05.244143] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244148] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.244159] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244165] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244169] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.244177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.244192] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.244252] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.244263] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.244267] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244272] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.244284] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244289] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244294] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.244301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.244317] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.244372] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.244379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.244383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244388] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.244400] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.244417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.244433] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.244494] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.244504] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.244509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.244525] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244531] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244535] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.244556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.244573] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.244638] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.244646] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.244650] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244655] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.244667] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244672] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244676] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.244684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.244699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.244764] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.244774] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.244779] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244784] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.244796] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244801] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244806] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.244814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.244829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.244896] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.244906] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.244911] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244916] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.244927] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244932] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.244937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.244945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.244960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.245021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.245028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.245033] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245038] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.245050] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245055] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245059] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.245067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.245082] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.245144] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.245151] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.245156] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245161] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.245172] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245177] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245182] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.245190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.245205] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.245270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.245277] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.245281] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245286] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.245298] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245303] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245308] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.245315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.245330] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.245392] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.245399] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.245403] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245408] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.245420] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245425] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245430] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.005 [2024-05-15 09:16:05.245437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.005 [2024-05-15 09:16:05.245453] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.005 [2024-05-15 09:16:05.245516] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.005 [2024-05-15 09:16:05.245524] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.005 [2024-05-15 09:16:05.245528] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245533] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.005 [2024-05-15 09:16:05.245553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.005 [2024-05-15 09:16:05.245559] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.245571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.245587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.245647] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.245654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.245658] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.245674] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245679] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.245692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:53.006 [2024-05-15 09:16:05.245707] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.245773] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.245780] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.245785] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245789] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.245801] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245806] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245811] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.245819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.245834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.245894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.245905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.245910] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.245926] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245931] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.245936] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.245944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.245959] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.246019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.246029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.246034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246039] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.246051] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246056] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246061] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.246068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.246083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.246147] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.246154] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.246159] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246164] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.246176] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246181] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246186] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.246194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.246208] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.246272] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.246279] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.246284] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246289] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.246301] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246306] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246310] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.246318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.246334] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.246399] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.246406] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.246411] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246415] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.246427] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246432] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246436] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.246444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.246459] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.246526] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.246537] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 
[2024-05-15 09:16:05.246549] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246555] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.246567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246572] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.246584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.246608] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.246672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.246686] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.246691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246696] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.246707] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246713] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246717] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.246725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.246741] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.246799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.246809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.246814] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246819] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.246831] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246836] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246841] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.246848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.246864] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.246921] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.246932] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.246936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.246953] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246959] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.246963] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.246971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.006 [2024-05-15 09:16:05.246986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.006 [2024-05-15 09:16:05.247045] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.006 [2024-05-15 09:16:05.247055] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.006 [2024-05-15 09:16:05.247060] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.247065] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.006 [2024-05-15 09:16:05.247077] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.247082] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.006 [2024-05-15 09:16:05.247086] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.006 [2024-05-15 09:16:05.247094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.007 [2024-05-15 09:16:05.247109] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.007 [2024-05-15 09:16:05.247167] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.007 [2024-05-15 09:16:05.247174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.007 [2024-05-15 09:16:05.247179] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247183] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.007 [2024-05-15 09:16:05.247195] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247200] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.007 [2024-05-15 09:16:05.247212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.007 [2024-05-15 09:16:05.247228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.007 [2024-05-15 09:16:05.247292] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.007 [2024-05-15 09:16:05.247302] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.007 [2024-05-15 09:16:05.247307] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.007 [2024-05-15 09:16:05.247323] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247329] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247333] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.007 [2024-05-15 09:16:05.247341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.007 [2024-05-15 09:16:05.247356] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.007 [2024-05-15 09:16:05.247417] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.007 [2024-05-15 09:16:05.247424] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.007 [2024-05-15 09:16:05.247429] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247434] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.007 [2024-05-15 09:16:05.247445] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247450] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.247455] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.007 [2024-05-15 09:16:05.247463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.007 [2024-05-15 09:16:05.247478] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.007 [2024-05-15 09:16:05.254852] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.007 [2024-05-15 09:16:05.254879] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.007 [2024-05-15 09:16:05.254884] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.254889] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.007 [2024-05-15 09:16:05.254908] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.254913] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.254918] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919280) 00:22:53.007 [2024-05-15 09:16:05.254929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.007 [2024-05-15 09:16:05.254966] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961d70, cid 3, qid 0 00:22:53.007 [2024-05-15 09:16:05.255156] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.007 [2024-05-15 09:16:05.255168] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.007 [2024-05-15 09:16:05.255173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.007 [2024-05-15 09:16:05.255178] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1961d70) on tqpair=0x1919280 00:22:53.007 [2024-05-15 09:16:05.255187] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 21 milliseconds 00:22:53.007 00:22:53.007 09:16:05 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:53.007 [2024-05-15 09:16:05.289531] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:53.007 [2024-05-15 09:16:05.289581] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73880 ] 00:22:53.007 [2024-05-15 09:16:05.417661] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:53.007 [2024-05-15 09:16:05.417737] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:53.007 [2024-05-15 09:16:05.417744] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:53.007 [2024-05-15 09:16:05.417761] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:53.007 [2024-05-15 09:16:05.417775] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:22:53.007 [2024-05-15 09:16:05.417917] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:53.007 [2024-05-15 09:16:05.417964] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1287280 0 00:22:53.272 [2024-05-15 09:16:05.433573] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:53.272 [2024-05-15 09:16:05.433594] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:53.272 [2024-05-15 09:16:05.433599] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:53.272 [2024-05-15 09:16:05.433603] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:53.273 [2024-05-15 09:16:05.433655] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.433661] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.433666] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.273 [2024-05-15 09:16:05.433681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:53.273 [2024-05-15 09:16:05.433711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.273 [2024-05-15 09:16:05.459585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.273 [2024-05-15 09:16:05.459605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.273 [2024-05-15 09:16:05.459610] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.273 [2024-05-15 09:16:05.459628] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:53.273 [2024-05-15 09:16:05.459638] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:53.273 [2024-05-15 09:16:05.459644] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:53.273 [2024-05-15 09:16:05.459664] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459669] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459674] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.273 [2024-05-15 09:16:05.459692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.273 [2024-05-15 09:16:05.459718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.273 [2024-05-15 09:16:05.459780] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.273 [2024-05-15 09:16:05.459787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.273 [2024-05-15 09:16:05.459791] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459796] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.273 [2024-05-15 09:16:05.459803] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:53.273 [2024-05-15 09:16:05.459811] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:53.273 [2024-05-15 09:16:05.459818] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.273 [2024-05-15 09:16:05.459834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.273 [2024-05-15 09:16:05.459848] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.273 [2024-05-15 09:16:05.459889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.273 [2024-05-15 09:16:05.459896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.273 [2024-05-15 09:16:05.459900] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459904] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.273 [2024-05-15 09:16:05.459911] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:53.273 [2024-05-15 09:16:05.459920] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:53.273 [2024-05-15 09:16:05.459927] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459932] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.459936] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.273 [2024-05-15 09:16:05.459943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.273 [2024-05-15 09:16:05.459957] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.273 [2024-05-15 09:16:05.459997] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.273 [2024-05-15 09:16:05.460003] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.273 [2024-05-15 09:16:05.460007] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460012] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.273 [2024-05-15 09:16:05.460018] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:53.273 [2024-05-15 09:16:05.460028] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460048] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460053] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.273 [2024-05-15 09:16:05.460060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.273 [2024-05-15 09:16:05.460074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.273 [2024-05-15 09:16:05.460120] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.273 [2024-05-15 09:16:05.460127] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.273 [2024-05-15 09:16:05.460132] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460136] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.273 [2024-05-15 09:16:05.460143] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:53.273 [2024-05-15 09:16:05.460149] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:53.273 [2024-05-15 09:16:05.460158] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:53.273 [2024-05-15 09:16:05.460264] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:53.273 [2024-05-15 09:16:05.460269] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:53.273 [2024-05-15 09:16:05.460279] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460284] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460288] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.273 [2024-05-15 09:16:05.460295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.273 [2024-05-15 09:16:05.460311] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.273 [2024-05-15 09:16:05.460354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.273 [2024-05-15 09:16:05.460360] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.273 [2024-05-15 09:16:05.460365] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460369] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.273 [2024-05-15 09:16:05.460376] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:53.273 [2024-05-15 09:16:05.460386] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460391] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.273 [2024-05-15 09:16:05.460403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.273 [2024-05-15 09:16:05.460418] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.273 [2024-05-15 09:16:05.460468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.273 [2024-05-15 09:16:05.460474] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.273 [2024-05-15 09:16:05.460479] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460483] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.273 [2024-05-15 09:16:05.460490] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:53.273 [2024-05-15 09:16:05.460496] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:53.273 [2024-05-15 09:16:05.460505] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:53.273 [2024-05-15 09:16:05.460522] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:53.273 [2024-05-15 09:16:05.460533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.273 [2024-05-15 09:16:05.460538] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.273 [2024-05-15 09:16:05.460546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.274 [2024-05-15 09:16:05.460572] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.274 [2024-05-15 09:16:05.460660] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.274 [2024-05-15 09:16:05.460667] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.274 [2024-05-15 09:16:05.460672] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460677] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287280): datao=0, datal=4096, cccid=0 00:22:53.274 [2024-05-15 09:16:05.460683] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cf950) on tqpair(0x1287280): expected_datao=0, payload_size=4096 00:22:53.274 [2024-05-15 09:16:05.460688] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460697] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460702] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.274 [2024-05-15 09:16:05.460718] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.274 [2024-05-15 09:16:05.460722] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460727] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.274 [2024-05-15 09:16:05.460738] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:53.274 [2024-05-15 09:16:05.460744] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:53.274 [2024-05-15 09:16:05.460761] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:53.274 [2024-05-15 09:16:05.460766] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:53.274 [2024-05-15 09:16:05.460771] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:53.274 [2024-05-15 09:16:05.460777] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.460786] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.460797] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460802] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460806] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.460813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.274 [2024-05-15 09:16:05.460828] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.274 [2024-05-15 09:16:05.460871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.274 [2024-05-15 09:16:05.460877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.274 [2024-05-15 09:16:05.460881] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460885] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cf950) on tqpair=0x1287280 00:22:53.274 [2024-05-15 09:16:05.460894] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460898] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460903] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.460909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.274 [2024-05-15 09:16:05.460916] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460920] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460925] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.460931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.274 [2024-05-15 09:16:05.460937] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460942] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460946] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.460952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.274 [2024-05-15 09:16:05.460959] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460963] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.460967] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.460973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.274 [2024-05-15 09:16:05.460979] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.460990] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.460997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461002] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.461008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.274 [2024-05-15 09:16:05.461024] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cf950, cid 0, qid 0 00:22:53.274 [2024-05-15 09:16:05.461030] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfab0, cid 1, qid 0 00:22:53.274 [2024-05-15 09:16:05.461035] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfc10, cid 2, qid 0 00:22:53.274 [2024-05-15 09:16:05.461040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.274 [2024-05-15 09:16:05.461045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfed0, cid 4, qid 0 00:22:53.274 [2024-05-15 09:16:05.461116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.274 [2024-05-15 09:16:05.461122] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.274 [2024-05-15 09:16:05.461126] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461131] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfed0) on tqpair=0x1287280 00:22:53.274 [2024-05-15 09:16:05.461137] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:53.274 [2024-05-15 09:16:05.461144] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.461155] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.461162] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.461169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461178] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.461184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.274 [2024-05-15 09:16:05.461198] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfed0, cid 4, qid 0 00:22:53.274 [2024-05-15 09:16:05.461239] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.274 [2024-05-15 09:16:05.461245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.274 [2024-05-15 09:16:05.461249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfed0) on tqpair=0x1287280 00:22:53.274 [2024-05-15 09:16:05.461303] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.461313] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.461321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461325] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.461332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.274 [2024-05-15 09:16:05.461347] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfed0, cid 4, qid 0 00:22:53.274 [2024-05-15 09:16:05.461396] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.274 [2024-05-15 09:16:05.461403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.274 [2024-05-15 09:16:05.461407] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461411] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287280): datao=0, datal=4096, cccid=4 00:22:53.274 [2024-05-15 09:16:05.461416] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cfed0) on tqpair(0x1287280): expected_datao=0, payload_size=4096 00:22:53.274 [2024-05-15 09:16:05.461422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.274 [2024-05-15 
09:16:05.461429] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461433] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.274 [2024-05-15 09:16:05.461447] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.274 [2024-05-15 09:16:05.461451] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461456] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfed0) on tqpair=0x1287280 00:22:53.274 [2024-05-15 09:16:05.461470] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:53.274 [2024-05-15 09:16:05.461485] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.461494] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:53.274 [2024-05-15 09:16:05.461502] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.274 [2024-05-15 09:16:05.461506] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287280) 00:22:53.274 [2024-05-15 09:16:05.461513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.274 [2024-05-15 09:16:05.461527] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfed0, cid 4, qid 0 00:22:53.274 [2024-05-15 09:16:05.461600] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.274 [2024-05-15 09:16:05.461607] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.274 [2024-05-15 09:16:05.461611] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461615] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287280): datao=0, datal=4096, cccid=4 00:22:53.275 [2024-05-15 09:16:05.461621] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cfed0) on tqpair(0x1287280): expected_datao=0, payload_size=4096 00:22:53.275 [2024-05-15 09:16:05.461626] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461633] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461637] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.275 [2024-05-15 09:16:05.461651] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.275 [2024-05-15 09:16:05.461655] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461660] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfed0) on tqpair=0x1287280 00:22:53.275 [2024-05-15 09:16:05.461675] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:53.275 [2024-05-15 09:16:05.461685] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 
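(For reference, not part of the captured console output: the trace above shows spdk_nvme_identify driving the full NVMe/TCP controller bring-up — connect adminq, read VS and CAP, enable the controller, then the IDENTIFY, AER, keep-alive and queue-count admin commands. A minimal sketch of the same flow through SPDK's public host API is below; it assumes the SPDK headers and libraries built earlier in this job, the program name "identify_sketch" is illustrative, and error handling is abbreviated.)

/*
 * Minimal sketch: connect to the same NVMe-oF/TCP target the test uses
 * and read the controller identify data that spdk_nvme_identify prints.
 * The transport string mirrors the -r argument passed by host/identify.sh.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* illustrative name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same target as the test: TCP, 10.0.0.2:4420, cnode1. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() runs the init state machine traced above:
	 * connect adminq, read VS/CAP, enable the controller, identify. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s  Serial: %.20s  FW: %.8s\n",
	       cdata->mn, cdata->sn, cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}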
00:22:53.275 [2024-05-15 09:16:05.461693] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.461704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.461719] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfed0, cid 4, qid 0 00:22:53.275 [2024-05-15 09:16:05.461762] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.275 [2024-05-15 09:16:05.461768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.275 [2024-05-15 09:16:05.461773] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461777] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287280): datao=0, datal=4096, cccid=4 00:22:53.275 [2024-05-15 09:16:05.461782] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cfed0) on tqpair(0x1287280): expected_datao=0, payload_size=4096 00:22:53.275 [2024-05-15 09:16:05.461787] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461794] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461798] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461806] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.275 [2024-05-15 09:16:05.461813] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.275 [2024-05-15 09:16:05.461817] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461821] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfed0) on tqpair=0x1287280 00:22:53.275 [2024-05-15 09:16:05.461830] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:53.275 [2024-05-15 09:16:05.461839] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:53.275 [2024-05-15 09:16:05.461848] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:53.275 [2024-05-15 09:16:05.461855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:53.275 [2024-05-15 09:16:05.461861] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:53.275 [2024-05-15 09:16:05.461867] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:53.275 [2024-05-15 09:16:05.461872] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:53.275 [2024-05-15 09:16:05.461878] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:53.275 [2024-05-15 09:16:05.461903] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 
09:16:05.461908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.461914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.461922] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461926] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.461930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.461937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.275 [2024-05-15 09:16:05.461955] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfed0, cid 4, qid 0 00:22:53.275 [2024-05-15 09:16:05.461961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0030, cid 5, qid 0 00:22:53.275 [2024-05-15 09:16:05.462016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.275 [2024-05-15 09:16:05.462023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.275 [2024-05-15 09:16:05.462027] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfed0) on tqpair=0x1287280 00:22:53.275 [2024-05-15 09:16:05.462039] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.275 [2024-05-15 09:16:05.462045] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.275 [2024-05-15 09:16:05.462049] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462054] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12d0030) on tqpair=0x1287280 00:22:53.275 [2024-05-15 09:16:05.462065] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462069] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.462076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.462089] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0030, cid 5, qid 0 00:22:53.275 [2024-05-15 09:16:05.462138] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.275 [2024-05-15 09:16:05.462144] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.275 [2024-05-15 09:16:05.462148] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462153] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12d0030) on tqpair=0x1287280 00:22:53.275 [2024-05-15 09:16:05.462164] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.462175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.462188] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0030, cid 5, qid 0 00:22:53.275 [2024-05-15 09:16:05.462233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.275 [2024-05-15 09:16:05.462239] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.275 [2024-05-15 09:16:05.462243] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462248] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12d0030) on tqpair=0x1287280 00:22:53.275 [2024-05-15 09:16:05.462259] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.462270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.462283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0030, cid 5, qid 0 00:22:53.275 [2024-05-15 09:16:05.462323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.275 [2024-05-15 09:16:05.462330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.275 [2024-05-15 09:16:05.462334] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12d0030) on tqpair=0x1287280 00:22:53.275 [2024-05-15 09:16:05.462351] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462356] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.462362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.462370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462374] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.462381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.462388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.462399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.462407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462411] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1287280) 00:22:53.275 [2024-05-15 09:16:05.462418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.275 [2024-05-15 09:16:05.462432] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0030, cid 5, qid 0 
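(For reference, not part of the captured console output: the four GET LOG PAGE commands queued just above — cid 5, 4, 6 and 7 — encode the log page ID in CDW10 bits 07:00 and the dword count minus one (NUMDL) in bits 31:16, which is why the c2h_data transfers that follow carry 8192, 512, 512 and 4096 bytes respectively. A small illustrative decode per the NVMe base specification, not taken from the SPDK sources:)

#include <stdint.h>
#include <stdio.h>

/* Decode a Get Log Page CDW10: LID in bits 07:00, NUMDL in bits 31:16. */
static void decode_get_log_page_cdw10(uint32_t cdw10)
{
	uint8_t  lid   = cdw10 & 0xff;
	uint32_t numdl = (cdw10 >> 16) & 0xffff;
	uint32_t bytes = (numdl + 1) * 4;	/* dwords -> bytes */

	printf("cdw10=0x%08x  LID=0x%02x  transfer=%u bytes\n",
	       cdw10, lid, bytes);
}

int main(void)
{
	/* The four Get Log Page commands queued above (cid 5, 4, 6, 7). */
	decode_get_log_page_cdw10(0x07ff0001);	/* Error Information,            8192 B */
	decode_get_log_page_cdw10(0x007f0002);	/* SMART / Health Information,    512 B */
	decode_get_log_page_cdw10(0x007f0003);	/* Firmware Slot Information,     512 B */
	decode_get_log_page_cdw10(0x03ff0005);	/* Commands Supported & Effects, 4096 B */
	return 0;
}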
00:22:53.275 [2024-05-15 09:16:05.462438] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfed0, cid 4, qid 0 00:22:53.275 [2024-05-15 09:16:05.462443] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d0190, cid 6, qid 0 00:22:53.275 [2024-05-15 09:16:05.462448] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d02f0, cid 7, qid 0 00:22:53.275 [2024-05-15 09:16:05.462565] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.275 [2024-05-15 09:16:05.462571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.275 [2024-05-15 09:16:05.462576] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.275 [2024-05-15 09:16:05.462580] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287280): datao=0, datal=8192, cccid=5 00:22:53.275 [2024-05-15 09:16:05.462585] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d0030) on tqpair(0x1287280): expected_datao=0, payload_size=8192 00:22:53.275 [2024-05-15 09:16:05.462591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462607] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462611] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.276 [2024-05-15 09:16:05.462624] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.276 [2024-05-15 09:16:05.462628] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462632] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287280): datao=0, datal=512, cccid=4 00:22:53.276 [2024-05-15 09:16:05.462637] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12cfed0) on tqpair(0x1287280): expected_datao=0, payload_size=512 00:22:53.276 [2024-05-15 09:16:05.462643] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462649] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462653] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.276 [2024-05-15 09:16:05.462665] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.276 [2024-05-15 09:16:05.462669] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462673] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287280): datao=0, datal=512, cccid=6 00:22:53.276 [2024-05-15 09:16:05.462679] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d0190) on tqpair(0x1287280): expected_datao=0, payload_size=512 00:22:53.276 [2024-05-15 09:16:05.462684] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462690] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462695] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462701] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.276 [2024-05-15 09:16:05.462707] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.276 [2024-05-15 09:16:05.462711] 
nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462715] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287280): datao=0, datal=4096, cccid=7 00:22:53.276 [2024-05-15 09:16:05.462720] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12d02f0) on tqpair(0x1287280): expected_datao=0, payload_size=4096 00:22:53.276 [2024-05-15 09:16:05.462725] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462732] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462736] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462742] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.276 [2024-05-15 09:16:05.462748] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.276 [2024-05-15 09:16:05.462752] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12d0030) on tqpair=0x1287280 00:22:53.276 [2024-05-15 09:16:05.462773] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.276 [2024-05-15 09:16:05.462780] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.276 [2024-05-15 09:16:05.462784] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.276 [2024-05-15 09:16:05.462788] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfed0) on tqpair=0x1287280 00:22:53.276 [2024-05-15 09:16:05.462799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.276 ===================================================== 00:22:53.276 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:53.276 ===================================================== 00:22:53.276 Controller Capabilities/Features 00:22:53.276 ================================ 00:22:53.276 Vendor ID: 8086 00:22:53.276 Subsystem Vendor ID: 8086 00:22:53.276 Serial Number: SPDK00000000000001 00:22:53.276 Model Number: SPDK bdev Controller 00:22:53.276 Firmware Version: 24.05 00:22:53.276 Recommended Arb Burst: 6 00:22:53.276 IEEE OUI Identifier: e4 d2 5c 00:22:53.276 Multi-path I/O 00:22:53.276 May have multiple subsystem ports: Yes 00:22:53.276 May have multiple controllers: Yes 00:22:53.276 Associated with SR-IOV VF: No 00:22:53.276 Max Data Transfer Size: 131072 00:22:53.276 Max Number of Namespaces: 32 00:22:53.276 Max Number of I/O Queues: 127 00:22:53.276 NVMe Specification Version (VS): 1.3 00:22:53.276 NVMe Specification Version (Identify): 1.3 00:22:53.276 Maximum Queue Entries: 128 00:22:53.276 Contiguous Queues Required: Yes 00:22:53.276 Arbitration Mechanisms Supported 00:22:53.276 Weighted Round Robin: Not Supported 00:22:53.276 Vendor Specific: Not Supported 00:22:53.276 Reset Timeout: 15000 ms 00:22:53.276 Doorbell Stride: 4 bytes 00:22:53.276 NVM Subsystem Reset: Not Supported 00:22:53.276 Command Sets Supported 00:22:53.276 NVM Command Set: Supported 00:22:53.276 Boot Partition: Not Supported 00:22:53.276 Memory Page Size Minimum: 4096 bytes 00:22:53.276 Memory Page Size Maximum: 4096 bytes 00:22:53.276 Persistent Memory Region: Not Supported 00:22:53.276 Optional Asynchronous Events Supported 00:22:53.276 Namespace Attribute Notices: Supported 00:22:53.276 Firmware Activation Notices: Not Supported 00:22:53.276 ANA Change 
Notices: Not Supported 00:22:53.276 PLE Aggregate Log Change Notices: Not Supported 00:22:53.276 LBA Status Info Alert Notices: Not Supported 00:22:53.276 EGE Aggregate Log Change Notices: Not Supported 00:22:53.276 Normal NVM Subsystem Shutdown event: Not Supported 00:22:53.276 Zone Descriptor Change Notices: Not Supported 00:22:53.276 Discovery Log Change Notices: Not Supported 00:22:53.276 Controller Attributes 00:22:53.276 128-bit Host Identifier: Supported 00:22:53.276 Non-Operational Permissive Mode: Not Supported 00:22:53.276 NVM Sets: Not Supported 00:22:53.276 Read Recovery Levels: Not Supported 00:22:53.276 Endurance Groups: Not Supported 00:22:53.276 Predictable Latency Mode: Not Supported 00:22:53.276 Traffic Based Keep ALive: Not Supported 00:22:53.276 Namespace Granularity: Not Supported 00:22:53.276 SQ Associations: Not Supported 00:22:53.276 UUID List: Not Supported 00:22:53.276 Multi-Domain Subsystem: Not Supported 00:22:53.276 Fixed Capacity Management: Not Supported 00:22:53.276 Variable Capacity Management: Not Supported 00:22:53.276 Delete Endurance Group: Not Supported 00:22:53.276 Delete NVM Set: Not Supported 00:22:53.276 Extended LBA Formats Supported: Not Supported 00:22:53.276 Flexible Data Placement Supported: Not Supported 00:22:53.276 00:22:53.276 Controller Memory Buffer Support 00:22:53.276 ================================ 00:22:53.276 Supported: No 00:22:53.276 00:22:53.276 Persistent Memory Region Support 00:22:53.276 ================================ 00:22:53.276 Supported: No 00:22:53.276 00:22:53.276 Admin Command Set Attributes 00:22:53.276 ============================ 00:22:53.276 Security Send/Receive: Not Supported 00:22:53.276 Format NVM: Not Supported 00:22:53.276 Firmware Activate/Download: Not Supported 00:22:53.276 Namespace Management: Not Supported 00:22:53.276 Device Self-Test: Not Supported 00:22:53.276 Directives: Not Supported 00:22:53.276 NVMe-MI: Not Supported 00:22:53.276 Virtualization Management: Not Supported 00:22:53.276 Doorbell Buffer Config: Not Supported 00:22:53.276 Get LBA Status Capability: Not Supported 00:22:53.276 Command & Feature Lockdown Capability: Not Supported 00:22:53.276 Abort Command Limit: 4 00:22:53.276 Async Event Request Limit: 4 00:22:53.276 Number of Firmware Slots: N/A 00:22:53.276 Firmware Slot 1 Read-Only: N/A 00:22:53.276 Firmware Activation Without Reset: N/A 00:22:53.276 Multiple Update Detection Support: N/A 00:22:53.276 Firmware Update Granularity: No Information Provided 00:22:53.276 Per-Namespace SMART Log: No 00:22:53.276 Asymmetric Namespace Access Log Page: Not Supported 00:22:53.276 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:53.276 Command Effects Log Page: Supported 00:22:53.276 Get Log Page Extended Data: Supported 00:22:53.276 Telemetry Log Pages: Not Supported 00:22:53.276 Persistent Event Log Pages: Not Supported 00:22:53.276 Supported Log Pages Log Page: May Support 00:22:53.276 Commands Supported & Effects Log Page: Not Supported 00:22:53.276 Feature Identifiers & Effects Log Page:May Support 00:22:53.276 NVMe-MI Commands & Effects Log Page: May Support 00:22:53.276 Data Area 4 for Telemetry Log: Not Supported 00:22:53.276 Error Log Page Entries Supported: 128 00:22:53.276 Keep Alive: Supported 00:22:53.276 Keep Alive Granularity: 10000 ms 00:22:53.276 00:22:53.276 NVM Command Set Attributes 00:22:53.276 ========================== 00:22:53.276 Submission Queue Entry Size 00:22:53.276 Max: 64 00:22:53.276 Min: 64 00:22:53.276 Completion Queue Entry Size 00:22:53.276 Max: 16 
00:22:53.276 Min: 16 00:22:53.276 Number of Namespaces: 32 00:22:53.276 Compare Command: Supported 00:22:53.276 Write Uncorrectable Command: Not Supported 00:22:53.276 Dataset Management Command: Supported 00:22:53.276 Write Zeroes Command: Supported 00:22:53.276 Set Features Save Field: Not Supported 00:22:53.277 Reservations: Supported 00:22:53.277 Timestamp: Not Supported 00:22:53.277 Copy: Supported 00:22:53.277 Volatile Write Cache: Present 00:22:53.277 Atomic Write Unit (Normal): 1 00:22:53.277 Atomic Write Unit (PFail): 1 00:22:53.277 Atomic Compare & Write Unit: 1 00:22:53.277 Fused Compare & Write: Supported 00:22:53.277 Scatter-Gather List 00:22:53.277 SGL Command Set: Supported 00:22:53.277 SGL Keyed: Supported 00:22:53.277 SGL Bit Bucket Descriptor: Not Supported 00:22:53.277 SGL Metadata Pointer: Not Supported 00:22:53.277 Oversized SGL: Not Supported 00:22:53.277 SGL Metadata Address: Not Supported 00:22:53.277 SGL Offset: Supported 00:22:53.277 Transport SGL Data Block: Not Supported 00:22:53.277 Replay Protected Memory Block: Not Supported 00:22:53.277 00:22:53.277 Firmware Slot Information 00:22:53.277 ========================= 00:22:53.277 Active slot: 1 00:22:53.277 Slot 1 Firmware Revision: 24.05 00:22:53.277 00:22:53.277 00:22:53.277 Commands Supported and Effects 00:22:53.277 ============================== 00:22:53.277 Admin Commands 00:22:53.277 -------------- 00:22:53.277 Get Log Page (02h): Supported 00:22:53.277 Identify (06h): Supported 00:22:53.277 Abort (08h): Supported 00:22:53.277 Set Features (09h): Supported 00:22:53.277 Get Features (0Ah): Supported 00:22:53.277 Asynchronous Event Request (0Ch): Supported 00:22:53.277 Keep Alive (18h): Supported 00:22:53.277 I/O Commands 00:22:53.277 ------------ 00:22:53.277 Flush (00h): Supported LBA-Change 00:22:53.277 Write (01h): Supported LBA-Change 00:22:53.277 Read (02h): Supported 00:22:53.277 Compare (05h): Supported 00:22:53.277 Write Zeroes (08h): Supported LBA-Change 00:22:53.277 Dataset Management (09h): Supported LBA-Change 00:22:53.277 Copy (19h): Supported LBA-Change 00:22:53.277 Unknown (79h): Supported LBA-Change 00:22:53.277 Unknown (7Ah): Supported 00:22:53.277 00:22:53.277 Error Log 00:22:53.277 ========= 00:22:53.277 00:22:53.277 Arbitration 00:22:53.277 =========== 00:22:53.277 Arbitration Burst: 1 00:22:53.277 00:22:53.277 Power Management 00:22:53.277 ================ 00:22:53.277 Number of Power States: 1 00:22:53.277 Current Power State: Power State #0 00:22:53.277 Power State #0: 00:22:53.277 Max Power: 0.00 W 00:22:53.277 Non-Operational State: Operational 00:22:53.277 Entry Latency: Not Reported 00:22:53.277 Exit Latency: Not Reported 00:22:53.277 Relative Read Throughput: 0 00:22:53.277 Relative Read Latency: 0 00:22:53.277 Relative Write Throughput: 0 00:22:53.277 Relative Write Latency: 0 00:22:53.277 Idle Power: Not Reported 00:22:53.277 Active Power: Not Reported 00:22:53.277 Non-Operational Permissive Mode: Not Supported 00:22:53.277 00:22:53.277 Health Information 00:22:53.277 ================== 00:22:53.277 Critical Warnings: 00:22:53.277 Available Spare Space: OK 00:22:53.277 Temperature: OK 00:22:53.277 Device Reliability: OK 00:22:53.277 Read Only: No 00:22:53.277 Volatile Memory Backup: OK 00:22:53.277 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:53.277 Temperature Threshold: [2024-05-15 09:16:05.462806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.277 [2024-05-15 09:16:05.462810] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:53.277 [2024-05-15 09:16:05.462814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12d0190) on tqpair=0x1287280 00:22:53.277 [2024-05-15 09:16:05.462826] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.277 [2024-05-15 09:16:05.462832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.277 [2024-05-15 09:16:05.462836] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.277 [2024-05-15 09:16:05.462840] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12d02f0) on tqpair=0x1287280 00:22:53.277 [2024-05-15 09:16:05.462944] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.277 [2024-05-15 09:16:05.462950] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1287280) 00:22:53.277 [2024-05-15 09:16:05.462957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.277 [2024-05-15 09:16:05.462974] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12d02f0, cid 7, qid 0 00:22:53.277 [2024-05-15 09:16:05.463016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.277 [2024-05-15 09:16:05.463022] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.277 [2024-05-15 09:16:05.463026] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.277 [2024-05-15 09:16:05.463031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12d02f0) on tqpair=0x1287280 00:22:53.277 [2024-05-15 09:16:05.463069] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:53.277 [2024-05-15 09:16:05.463082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.277 [2024-05-15 09:16:05.463090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.277 [2024-05-15 09:16:05.463097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.277 [2024-05-15 09:16:05.463103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.277 [2024-05-15 09:16:05.463112] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.277 [2024-05-15 09:16:05.463116] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.277 [2024-05-15 09:16:05.463121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.277 [2024-05-15 09:16:05.463127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.277 [2024-05-15 09:16:05.463143] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.277 [2024-05-15 09:16:05.463185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.277 [2024-05-15 09:16:05.463191] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.277 [2024-05-15 09:16:05.463195] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.277 [2024-05-15 09:16:05.463199] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.277 [2024-05-15 09:16:05.463207] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.277 [2024-05-15 09:16:05.463212] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.277 [2024-05-15 09:16:05.463216] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.277 [2024-05-15 09:16:05.463222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.277 [2024-05-15 09:16:05.463238] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.278 [2024-05-15 09:16:05.463299] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.278 [2024-05-15 09:16:05.463306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.278 [2024-05-15 09:16:05.463310] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.278 [2024-05-15 09:16:05.463330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.278 [2024-05-15 09:16:05.463337] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:53.278 [2024-05-15 09:16:05.463343] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:53.278 [2024-05-15 09:16:05.463354] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.278 [2024-05-15 09:16:05.463359] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.278 [2024-05-15 09:16:05.463363] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.278 [2024-05-15 09:16:05.463370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.278 [2024-05-15 09:16:05.463385] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.278 [2024-05-15 09:16:05.463431] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.278 [2024-05-15 09:16:05.463437] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.278 [2024-05-15 09:16:05.463442] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.278 [2024-05-15 09:16:05.463446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.278 [2024-05-15 09:16:05.463458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.278 [2024-05-15 09:16:05.463463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.278 [2024-05-15 09:16:05.463467] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.278 [2024-05-15 09:16:05.463474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.278 [2024-05-15 09:16:05.463488] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.278 [2024-05-15 09:16:05.463529] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.278 [2024-05-15 09:16:05.463536] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.278 [2024-05-15 09:16:05.463540] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
[... the same DEBUG cycle (a pdu type = 5 capsule response completing tcp_req 0x12cfd70 on tqpair 0x1287280, followed by the next FABRIC PROPERTY GET capsule on cid 3) repeats while the host polls CSTS for shutdown completion; the duplicated entries from 09:16:05.463529 through 09:16:05.469422 are omitted here ...]
00:22:53.283 [2024-05-15 09:16:05.469466] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:53.283 [2024-05-15 09:16:05.469472] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:53.283 [2024-05-15 09:16:05.469476] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:53.283 [2024-05-15 09:16:05.469481] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*:
complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.283 [2024-05-15 09:16:05.469491] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469495] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469500] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.283 [2024-05-15 09:16:05.469506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.283 [2024-05-15 09:16:05.469520] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.283 [2024-05-15 09:16:05.469564] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.283 [2024-05-15 09:16:05.469571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.283 [2024-05-15 09:16:05.469575] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469579] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.283 [2024-05-15 09:16:05.469590] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469594] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.283 [2024-05-15 09:16:05.469605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.283 [2024-05-15 09:16:05.469619] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.283 [2024-05-15 09:16:05.469654] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.283 [2024-05-15 09:16:05.469661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.283 [2024-05-15 09:16:05.469665] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469669] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.283 [2024-05-15 09:16:05.469680] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469688] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.283 [2024-05-15 09:16:05.469695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.283 [2024-05-15 09:16:05.469708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.283 [2024-05-15 09:16:05.469755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.283 [2024-05-15 09:16:05.469761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.283 [2024-05-15 09:16:05.469765] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469770] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.283 [2024-05-15 09:16:05.469780] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.283 [2024-05-15 09:16:05.469784] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.469788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.469795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.469808] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.469846] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.469852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.469856] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.469860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.469871] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.469875] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.469879] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.469886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.469900] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.469946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.469953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.469957] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.469961] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.469972] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.469976] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.469980] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.469987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470001] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470048] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470058] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470062] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470074] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470079] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470103] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470142] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470157] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470168] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470176] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470196] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470243] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470271] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470291] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470347] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470357] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470361] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:53.284 [2024-05-15 09:16:05.470386] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470426] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470437] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470441] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470452] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470457] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470461] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470531] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470535] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470547] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470557] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470562] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470566] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470634] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470638] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470643] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470653] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470658] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470662] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470682] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470719] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470726] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470730] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470734] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470745] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470749] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470774] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470817] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470824] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470828] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470832] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470842] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470847] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470851] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.284 [2024-05-15 09:16:05.470858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.284 [2024-05-15 09:16:05.470872] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.284 [2024-05-15 09:16:05.470910] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.284 [2024-05-15 09:16:05.470916] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.284 [2024-05-15 09:16:05.470920] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470924] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.284 [2024-05-15 09:16:05.470935] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470939] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.284 [2024-05-15 09:16:05.470944] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.470950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.470964] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471002] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471008] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 
[2024-05-15 09:16:05.471012] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471017] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471027] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471032] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471036] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471093] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471100] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.471104] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471108] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471119] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471123] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471128] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471156] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471207] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.471211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471230] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471235] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471255] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471296] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.471307] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471311] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471338] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471343] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471348] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.471433] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471437] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471449] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471454] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471458] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471530] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471561] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.471565] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471570] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471596] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471601] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471606] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471629] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471670] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471677] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.471682] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471695] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471707] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471711] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471716] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471738] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471784] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471791] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.471796] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471800] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471811] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471816] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471821] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471886] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.471893] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.471898] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471902] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.471914] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471918] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.471923] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.471930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.471945] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.471994] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.472004] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.472009] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.472013] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.472025] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.472030] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.472034] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1287280) 00:22:53.285 [2024-05-15 09:16:05.472041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.285 [2024-05-15 09:16:05.472056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.285 [2024-05-15 09:16:05.472099] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.285 [2024-05-15 09:16:05.472107] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.285 [2024-05-15 09:16:05.472111] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.472116] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.285 [2024-05-15 09:16:05.472127] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.285 [2024-05-15 09:16:05.472132] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472137] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.286 [2024-05-15 09:16:05.472144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.286 [2024-05-15 09:16:05.472159] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.286 [2024-05-15 09:16:05.472199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.286 [2024-05-15 09:16:05.472206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.286 [2024-05-15 09:16:05.472211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.286 [2024-05-15 09:16:05.472227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472231] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.286 [2024-05-15 09:16:05.472243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.286 [2024-05-15 09:16:05.472258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.286 [2024-05-15 09:16:05.472295] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.286 [2024-05-15 09:16:05.472302] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.286 [2024-05-15 09:16:05.472306] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472311] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.286 [2024-05-15 09:16:05.472322] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472329] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472333] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.286 [2024-05-15 09:16:05.472340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:53.286 [2024-05-15 09:16:05.472355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.286 [2024-05-15 09:16:05.472398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.286 [2024-05-15 09:16:05.472405] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.286 [2024-05-15 09:16:05.472409] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472414] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.286 [2024-05-15 09:16:05.472425] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472430] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472434] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.286 [2024-05-15 09:16:05.472442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.286 [2024-05-15 09:16:05.472456] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.286 [2024-05-15 09:16:05.472499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.286 [2024-05-15 09:16:05.472506] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.286 [2024-05-15 09:16:05.472511] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472516] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.286 [2024-05-15 09:16:05.472527] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472532] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.472536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287280) 00:22:53.286 [2024-05-15 09:16:05.496595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.286 [2024-05-15 09:16:05.496661] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12cfd70, cid 3, qid 0 00:22:53.286 [2024-05-15 09:16:05.496761] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.286 [2024-05-15 09:16:05.496769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.286 [2024-05-15 09:16:05.496774] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.286 [2024-05-15 09:16:05.496779] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12cfd70) on tqpair=0x1287280 00:22:53.286 [2024-05-15 09:16:05.496791] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 33 milliseconds 00:22:53.286 0 Kelvin (-273 Celsius) 00:22:53.286 Available Spare: 0% 00:22:53.286 Available Spare Threshold: 0% 00:22:53.286 Life Percentage Used: 0% 00:22:53.286 Data Units Read: 0 00:22:53.286 Data Units Written: 0 00:22:53.286 Host Read Commands: 0 00:22:53.286 Host Write Commands: 0 00:22:53.286 Controller Busy Time: 0 minutes 00:22:53.286 Power Cycles: 0 00:22:53.286 Power On Hours: 0 hours 00:22:53.286 Unsafe Shutdowns: 0 00:22:53.286 Unrecoverable Media Errors: 0 00:22:53.286 Lifetime Error Log Entries: 0 00:22:53.286 Warning 
Temperature Time: 0 minutes 00:22:53.286 Critical Temperature Time: 0 minutes 00:22:53.286 00:22:53.286 Number of Queues 00:22:53.286 ================ 00:22:53.286 Number of I/O Submission Queues: 127 00:22:53.286 Number of I/O Completion Queues: 127 00:22:53.286 00:22:53.286 Active Namespaces 00:22:53.286 ================= 00:22:53.286 Namespace ID:1 00:22:53.286 Error Recovery Timeout: Unlimited 00:22:53.286 Command Set Identifier: NVM (00h) 00:22:53.286 Deallocate: Supported 00:22:53.286 Deallocated/Unwritten Error: Not Supported 00:22:53.286 Deallocated Read Value: Unknown 00:22:53.286 Deallocate in Write Zeroes: Not Supported 00:22:53.286 Deallocated Guard Field: 0xFFFF 00:22:53.286 Flush: Supported 00:22:53.286 Reservation: Supported 00:22:53.286 Namespace Sharing Capabilities: Multiple Controllers 00:22:53.286 Size (in LBAs): 131072 (0GiB) 00:22:53.286 Capacity (in LBAs): 131072 (0GiB) 00:22:53.286 Utilization (in LBAs): 131072 (0GiB) 00:22:53.286 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:53.286 EUI64: ABCDEF0123456789 00:22:53.286 UUID: fcfa5a3b-d9ba-4212-82a2-6eb02f0239e0 00:22:53.286 Thin Provisioning: Not Supported 00:22:53.286 Per-NS Atomic Units: Yes 00:22:53.286 Atomic Boundary Size (Normal): 0 00:22:53.286 Atomic Boundary Size (PFail): 0 00:22:53.286 Atomic Boundary Offset: 0 00:22:53.286 Maximum Single Source Range Length: 65535 00:22:53.286 Maximum Copy Length: 65535 00:22:53.286 Maximum Source Range Count: 1 00:22:53.286 NGUID/EUI64 Never Reused: No 00:22:53.286 Namespace Write Protected: No 00:22:53.286 Number of LBA Formats: 1 00:22:53.286 Current LBA Format: LBA Format #00 00:22:53.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:53.286 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.286 rmmod nvme_tcp 00:22:53.286 rmmod nvme_fabrics 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 73839 ']' 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 73839 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 73839 ']' 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@951 -- # kill -0 73839 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73839 00:22:53.286 killing process with pid 73839 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73839' 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 73839 00:22:53.286 [2024-05-15 09:16:05.615330] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:53.286 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 73839 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:53.544 00:22:53.544 real 0m2.619s 00:22:53.544 user 0m6.909s 00:22:53.544 sys 0m0.810s 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:53.544 ************************************ 00:22:53.544 END TEST nvmf_identify 00:22:53.544 ************************************ 00:22:53.544 09:16:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.544 09:16:05 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:53.544 09:16:05 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:53.544 09:16:05 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:53.544 09:16:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.544 ************************************ 00:22:53.544 START TEST nvmf_perf 00:22:53.544 ************************************ 00:22:53.544 09:16:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:53.804 * Looking for test storage... 
00:22:53.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:53.804 09:16:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:53.805 Cannot find device "nvmf_tgt_br" 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:53.805 Cannot find device "nvmf_tgt_br2" 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:53.805 Cannot find device "nvmf_tgt_br" 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:53.805 Cannot find device "nvmf_tgt_br2" 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:53.805 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.063 
09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.063 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:54.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:22:54.064 00:22:54.064 --- 10.0.0.2 ping statistics --- 00:22:54.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.064 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:54.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:54.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:54.064 00:22:54.064 --- 10.0.0.3 ping statistics --- 00:22:54.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.064 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:22:54.064 00:22:54.064 --- 10.0.0.1 ping statistics --- 00:22:54.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.064 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.064 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74046 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74046 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 74046 ']' 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:54.322 09:16:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:54.322 [2024-05-15 09:16:06.565349] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:22:54.322 [2024-05-15 09:16:06.565816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.322 [2024-05-15 09:16:06.711745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.579 [2024-05-15 09:16:06.812984] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.579 [2024-05-15 09:16:06.813036] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:54.579 [2024-05-15 09:16:06.813046] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.579 [2024-05-15 09:16:06.813054] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.579 [2024-05-15 09:16:06.813061] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.579 [2024-05-15 09:16:06.813261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.579 [2024-05-15 09:16:06.813377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.579 [2024-05-15 09:16:06.814121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.579 [2024-05-15 09:16:06.814124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.144 09:16:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:55.144 09:16:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:22:55.144 09:16:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.144 09:16:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:55.144 09:16:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:55.144 09:16:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.144 09:16:07 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:55.144 09:16:07 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:55.403 09:16:07 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:22:55.403 09:16:07 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:55.661 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:22:55.661 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:55.918 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:55.918 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:22:55.918 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:55.918 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:55.918 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.178 [2024-05-15 09:16:08.593601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.178 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.438 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.438 09:16:08 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.004 09:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:57.004 09:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:57.004 09:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.264 [2024-05-15 09:16:09.662856] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:57.264 [2024-05-15 09:16:09.663150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.264 09:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:57.526 09:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:57.526 09:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:57.526 09:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:57.526 09:16:09 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:58.937 Initializing NVMe Controllers 00:22:58.937 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:58.937 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:58.937 Initialization complete. Launching workers. 00:22:58.937 ======================================================== 00:22:58.937 Latency(us) 00:22:58.937 Device Information : IOPS MiB/s Average min max 00:22:58.937 PCIE (0000:00:10.0) NSID 1 from core 0: 22552.10 88.09 1419.79 333.66 15111.88 00:22:58.937 ======================================================== 00:22:58.937 Total : 22552.10 88.09 1419.79 333.66 15111.88 00:22:58.937 00:22:58.937 09:16:11 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:00.361 Initializing NVMe Controllers 00:23:00.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:00.361 Initialization complete. Launching workers. 00:23:00.361 ======================================================== 00:23:00.361 Latency(us) 00:23:00.361 Device Information : IOPS MiB/s Average min max 00:23:00.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3788.97 14.80 263.65 93.52 19237.58 00:23:00.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 39.00 0.15 25768.30 16232.89 31931.08 00:23:00.361 ======================================================== 00:23:00.361 Total : 3827.97 14.95 523.49 93.52 31931.08 00:23:00.361 00:23:00.361 09:16:12 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:01.733 Initializing NVMe Controllers 00:23:01.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:01.733 Initialization complete. Launching workers. 
00:23:01.733 ======================================================== 00:23:01.733 Latency(us) 00:23:01.733 Device Information : IOPS MiB/s Average min max 00:23:01.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8953.06 34.97 3574.72 447.50 16040.04 00:23:01.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1262.73 4.93 25643.09 18334.86 27159.18 00:23:01.733 ======================================================== 00:23:01.733 Total : 10215.78 39.91 6302.49 447.50 27159.18 00:23:01.733 00:23:01.733 09:16:14 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:01.733 09:16:14 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:05.012 Initializing NVMe Controllers 00:23:05.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.012 Controller IO queue size 128, less than required. 00:23:05.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.012 Controller IO queue size 128, less than required. 00:23:05.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:05.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:05.012 Initialization complete. Launching workers. 00:23:05.012 ======================================================== 00:23:05.012 Latency(us) 00:23:05.012 Device Information : IOPS MiB/s Average min max 00:23:05.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1945.49 486.37 66752.55 43309.49 135161.02 00:23:05.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 254.00 63.50 576140.97 263242.37 914876.21 00:23:05.012 ======================================================== 00:23:05.012 Total : 2199.49 549.87 125577.13 43309.49 914876.21 00:23:05.012 00:23:05.012 09:16:17 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:05.012 Initializing NVMe Controllers 00:23:05.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.012 Controller IO queue size 128, less than required. 00:23:05.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.012 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:05.012 Controller IO queue size 128, less than required. 00:23:05.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.012 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:23:05.012 WARNING: Some requested NVMe devices were skipped 00:23:05.012 No valid NVMe controllers or AIO or URING devices found 00:23:05.012 09:16:17 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:08.313 Initializing NVMe Controllers 00:23:08.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.313 Controller IO queue size 128, less than required. 00:23:08.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:08.313 Controller IO queue size 128, less than required. 00:23:08.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:08.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:08.313 Initialization complete. Launching workers. 00:23:08.313 00:23:08.313 ==================== 00:23:08.313 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:08.313 TCP transport: 00:23:08.313 polls: 17014 00:23:08.313 idle_polls: 0 00:23:08.313 sock_completions: 17014 00:23:08.313 nvme_completions: 6381 00:23:08.313 submitted_requests: 9554 00:23:08.313 queued_requests: 1 00:23:08.313 00:23:08.313 ==================== 00:23:08.313 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:08.313 TCP transport: 00:23:08.313 polls: 15109 00:23:08.313 idle_polls: 0 00:23:08.313 sock_completions: 15109 00:23:08.313 nvme_completions: 5593 00:23:08.313 submitted_requests: 8404 00:23:08.313 queued_requests: 1 00:23:08.313 ======================================================== 00:23:08.313 Latency(us) 00:23:08.313 Device Information : IOPS MiB/s Average min max 00:23:08.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1593.41 398.35 82747.46 53749.03 144948.73 00:23:08.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1396.61 349.15 92405.48 37995.92 189719.20 00:23:08.313 ======================================================== 00:23:08.313 Total : 2990.02 747.50 87258.62 37995.92 189719.20 00:23:08.313 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.313 rmmod nvme_tcp 00:23:08.313 rmmod nvme_fabrics 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74046 ']' 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74046 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 74046 ']' 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 74046 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74046 00:23:08.313 killing process with pid 74046 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74046' 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 74046 00:23:08.313 [2024-05-15 09:16:20.660705] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:08.313 09:16:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 74046 00:23:08.881 09:16:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.881 09:16:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.881 09:16:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.881 09:16:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.881 09:16:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.881 09:16:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.881 09:16:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.881 09:16:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.140 09:16:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:09.140 00:23:09.140 real 0m15.379s 00:23:09.140 user 0m56.113s 00:23:09.140 sys 0m4.533s 00:23:09.140 09:16:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:09.140 09:16:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 ************************************ 00:23:09.140 END TEST nvmf_perf 00:23:09.140 ************************************ 00:23:09.140 09:16:21 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:09.140 09:16:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:09.140 09:16:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:09.140 09:16:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.140 ************************************ 00:23:09.140 START TEST nvmf_fio_host 00:23:09.140 ************************************ 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh 
--transport=tcp 00:23:09.140 * Looking for test storage... 00:23:09.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.140 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:09.141 09:16:21 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:09.141 Cannot find device "nvmf_tgt_br" 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:23:09.141 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:09.398 Cannot find device "nvmf_tgt_br2" 00:23:09.398 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:23:09.398 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:09.398 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:09.398 Cannot find device "nvmf_tgt_br" 00:23:09.398 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:23:09.398 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:09.399 Cannot find device "nvmf_tgt_br2" 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:09.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:09.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:09.399 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:09.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:23:09.657 00:23:09.657 --- 10.0.0.2 ping statistics --- 00:23:09.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.657 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:09.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:09.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:23:09.657 00:23:09.657 --- 10.0.0.3 ping statistics --- 00:23:09.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.657 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:09.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:23:09.657 00:23:09.657 --- 10.0.0.1 ping statistics --- 00:23:09.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.657 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:23:09.657 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=74464 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 74464 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 74464 ']' 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:09.658 09:16:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.658 [2024-05-15 09:16:22.001006] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:23:09.658 [2024-05-15 09:16:22.001268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.917 [2024-05-15 09:16:22.141560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.917 [2024-05-15 09:16:22.272754] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.917 [2024-05-15 09:16:22.272862] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:09.917 [2024-05-15 09:16:22.272878] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.917 [2024-05-15 09:16:22.272892] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.917 [2024-05-15 09:16:22.272904] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.917 [2024-05-15 09:16:22.273151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.917 [2024-05-15 09:16:22.273336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.917 [2024-05-15 09:16:22.273993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.917 [2024-05-15 09:16:22.274000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.850 [2024-05-15 09:16:23.087207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.850 Malloc1 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.850 [2024-05-15 09:16:23.179272] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:10.850 [2024-05-15 09:16:23.179535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:10.850 09:16:23 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:11.107 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:11.107 fio-3.35 00:23:11.107 Starting 1 thread 00:23:13.675 00:23:13.675 test: (groupid=0, jobs=1): err= 0: pid=74525: Wed May 15 09:16:25 2024 00:23:13.675 read: IOPS=9359, BW=36.6MiB/s (38.3MB/s)(73.3MiB/2006msec) 00:23:13.675 slat (nsec): min=1616, max=367156, avg=2264.36, stdev=3635.02 00:23:13.675 clat (usec): min=2935, max=13446, avg=7138.51, stdev=585.44 00:23:13.675 lat (usec): min=2970, max=13448, avg=7140.78, stdev=585.23 00:23:13.675 clat percentiles (usec): 00:23:13.675 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:23:13.675 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:23:13.675 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8094], 00:23:13.675 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[10552], 99.95th=[11994], 00:23:13.675 | 99.99th=[13435] 00:23:13.675 bw ( KiB/s): min=36920, max=37736, per=99.93%, avg=37414.00, stdev=382.69, samples=4 00:23:13.675 iops : min= 9230, max= 9434, avg=9353.50, stdev=95.67, samples=4 00:23:13.675 write: IOPS=9361, BW=36.6MiB/s (38.3MB/s)(73.4MiB/2006msec); 0 zone resets 00:23:13.675 slat (nsec): min=1686, max=251236, avg=2326.39, stdev=2177.50 00:23:13.675 clat (usec): min=2796, max=12741, avg=6491.10, stdev=535.04 00:23:13.675 lat (usec): min=2812, max=12743, avg=6493.43, stdev=534.92 00:23:13.675 clat percentiles (usec): 00:23:13.675 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:23:13.675 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:23:13.675 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7373], 00:23:13.675 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[10290], 99.95th=[11863], 00:23:13.675 | 99.99th=[12649] 00:23:13.675 bw ( KiB/s): min=36664, max=38224, per=99.99%, avg=37442.00, stdev=655.88, samples=4 00:23:13.675 iops : min= 9166, max= 9556, avg=9360.50, stdev=163.97, samples=4 00:23:13.675 lat (msec) : 4=0.06%, 10=99.82%, 20=0.13% 00:23:13.675 cpu : usr=72.62%, sys=21.90%, ctx=39, majf=0, minf=3 00:23:13.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:13.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:13.675 issued rwts: total=18776,18779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:13.675 00:23:13.675 Run status group 0 (all jobs): 00:23:13.675 READ: bw=36.6MiB/s (38.3MB/s), 36.6MiB/s-36.6MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.9MB), run=2006-2006msec 00:23:13.675 WRITE: bw=36.6MiB/s (38.3MB/s), 36.6MiB/s-36.6MiB/s (38.3MB/s-38.3MB/s), io=73.4MiB (76.9MB), run=2006-2006msec 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:13.675 
09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:13.675 09:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:13.675 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:13.675 fio-3.35 00:23:13.675 Starting 1 thread 00:23:16.259 00:23:16.259 test: (groupid=0, jobs=1): err= 0: pid=74573: Wed May 15 09:16:28 2024 00:23:16.259 read: IOPS=8668, BW=135MiB/s (142MB/s)(272MiB/2007msec) 00:23:16.259 slat (usec): min=2, max=126, avg= 3.57, stdev= 1.91 00:23:16.259 clat (usec): min=2027, max=16776, avg=8147.88, stdev=2492.72 00:23:16.259 lat (usec): min=2031, max=16779, avg=8151.45, stdev=2492.82 00:23:16.259 clat percentiles (usec): 00:23:16.259 | 1.00th=[ 3785], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5932], 00:23:16.259 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8586], 00:23:16.259 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11338], 95.00th=[12649], 00:23:16.259 | 99.00th=[15008], 99.50th=[15664], 99.90th=[16319], 99.95th=[16581], 00:23:16.259 | 99.99th=[16712] 00:23:16.259 bw ( KiB/s): min=59744, max=81824, per=51.73%, avg=71744.00, stdev=11341.58, samples=4 00:23:16.259 iops : min= 3734, max= 5114, avg=4484.00, stdev=708.85, samples=4 00:23:16.259 write: IOPS=5138, BW=80.3MiB/s (84.2MB/s)(146MiB/1823msec); 0 
zone resets 00:23:16.259 slat (usec): min=31, max=334, avg=38.82, stdev= 9.05 00:23:16.259 clat (usec): min=3026, max=19158, avg=11467.73, stdev=2211.36 00:23:16.259 lat (usec): min=3062, max=19195, avg=11506.55, stdev=2213.58 00:23:16.259 clat percentiles (usec): 00:23:16.259 | 1.00th=[ 7177], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9503], 00:23:16.259 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:23:16.259 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14353], 95.00th=[15533], 00:23:16.259 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482], 00:23:16.259 | 99.99th=[19268] 00:23:16.259 bw ( KiB/s): min=63744, max=85696, per=91.15%, avg=74944.00, stdev=11077.36, samples=4 00:23:16.259 iops : min= 3984, max= 5356, avg=4684.00, stdev=692.34, samples=4 00:23:16.259 lat (msec) : 4=1.25%, 10=59.14%, 20=39.61% 00:23:16.259 cpu : usr=80.01%, sys=15.40%, ctx=21, majf=0, minf=28 00:23:16.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:16.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:16.259 issued rwts: total=17397,9368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:16.259 00:23:16.259 Run status group 0 (all jobs): 00:23:16.259 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=272MiB (285MB), run=2007-2007msec 00:23:16.259 WRITE: bw=80.3MiB/s (84.2MB/s), 80.3MiB/s-80.3MiB/s (84.2MB/s-84.2MB/s), io=146MiB (153MB), run=1823-1823msec 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:16.259 rmmod nvme_tcp 00:23:16.259 rmmod nvme_fabrics 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74464 ']' 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74464 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 74464 ']' 00:23:16.259 09:16:28 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 74464 00:23:16.259 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74464 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:16.260 killing process with pid 74464 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74464' 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 74464 00:23:16.260 [2024-05-15 09:16:28.290212] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 74464 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:16.260 00:23:16.260 real 0m7.173s 00:23:16.260 user 0m27.617s 00:23:16.260 sys 0m2.336s 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:16.260 ************************************ 00:23:16.260 09:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.260 END TEST nvmf_fio_host 00:23:16.260 ************************************ 00:23:16.260 09:16:28 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:16.260 09:16:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:16.260 09:16:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:16.260 09:16:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.260 ************************************ 00:23:16.260 START TEST nvmf_failover 00:23:16.260 ************************************ 00:23:16.260 09:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:16.519 * Looking for test storage... 
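The fio invocation at the heart of the nvmf_fio_host run above is easy to lose in the xtrace noise, so here is a condensed, hand-written recap of what it boils down to: stock fio driven through SPDK's external NVMe ioengine, with the plugin (and any sanitizer runtime picked out via ldd) placed in LD_PRELOAD and the target described entirely in the --filename string. Paths and the target address mirror the log; the job name, runtime and CLI form of the job options are illustrative (the test itself feeds fio a .fio config file).

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    PLUGIN=$SPDK_DIR/build/fio/spdk_nvme
    asan_lib=                     # empty here: ldd found no ASAN runtime linked into the plugin (see above)
    LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
        --name=nvmf-remote --ioengine=spdk --thread=1 \
        --rw=randrw --bs=16k --iodepth=128 --time_based=1 --runtime=10 \
        --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'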
00:23:16.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.519 
09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:16.519 Cannot find device "nvmf_tgt_br" 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:16.519 Cannot find device "nvmf_tgt_br2" 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:16.519 Cannot find device "nvmf_tgt_br" 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:16.519 Cannot find device "nvmf_tgt_br2" 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:16.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:16.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:16.519 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:16.778 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:16.778 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:16.778 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:16.778 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:16.778 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:16.778 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:16.778 09:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:16.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:16.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:23:16.778 00:23:16.778 --- 10.0.0.2 ping statistics --- 00:23:16.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.778 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:16.778 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:16.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:16.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:23:16.778 00:23:16.778 --- 10.0.0.3 ping statistics --- 00:23:16.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.778 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:16.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:23:16.779 00:23:16.779 --- 10.0.0.1 ping statistics --- 00:23:16.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.779 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=74781 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 74781 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 74781 ']' 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
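Everything nvmf_veth_init just did is interleaved with the "Cannot find device" teardown noise above, so the same topology written out by hand may be easier to follow: the target runs inside the nvmf_tgt_ns_spdk namespace, the initiator side stays in the root namespace on 10.0.0.1, and the two host-side veth ends are bridged together. Interface, namespace and address names follow the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is built the same way and is left out of this sketch for brevity.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end, moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the two host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target sanity check
    # the target itself then starts inside the namespace, as above:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &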
00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:16.779 09:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:16.779 [2024-05-15 09:16:29.173237] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:23:16.779 [2024-05-15 09:16:29.173848] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.037 [2024-05-15 09:16:29.311860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:17.037 [2024-05-15 09:16:29.420982] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.037 [2024-05-15 09:16:29.421189] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.037 [2024-05-15 09:16:29.421346] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.038 [2024-05-15 09:16:29.421397] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.038 [2024-05-15 09:16:29.421425] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.038 [2024-05-15 09:16:29.421584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.038 [2024-05-15 09:16:29.422339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.038 [2024-05-15 09:16:29.422340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.971 09:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:17.972 09:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:23:17.972 09:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.972 09:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:17.972 09:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.972 09:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.972 09:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:18.229 [2024-05-15 09:16:30.466320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.229 09:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:18.488 Malloc0 00:23:18.488 09:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.488 09:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.745 09:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.004 [2024-05-15 09:16:31.323294] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:19.004 [2024-05-15 
09:16:31.323888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.004 09:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:19.263 [2024-05-15 09:16:31.567742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.263 09:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:19.521 [2024-05-15 09:16:31.872033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74849 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74849 /var/tmp/bdevperf.sock 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 74849 ']' 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
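From here on the interesting part of failover.sh is a plain RPC conversation, so a condensed recap of the sequence the log walks through below may help: the subsystem gets three TCP listeners on the same address, bdevperf (started with -z -r /var/tmp/bdevperf.sock, -w verify, -t 15) attaches two paths as NVMe0 and starts I/O, and the script then removes and re-adds listeners to force the initiator to fail over between ports while the run is in flight. The commands mirror the log; only the loop, the $rpc shorthand and the inline comments are added.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                              # three listeners, one subsystem
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf attaches two paths as NVMe0 over the bdevperf RPC socket ...
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # ... and while perform_tests runs, listeners are pulled and restored to force path failover:
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422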
00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:19.521 09:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.962 09:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:20.962 09:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:23:20.962 09:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.962 NVMe0n1 00:23:20.962 09:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.219 00:23:21.476 09:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74873 00:23:21.476 09:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.476 09:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:22.410 09:16:34 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.669 09:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:25.954 09:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.954 00:23:25.954 09:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:26.212 09:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:29.490 09:16:41 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.490 [2024-05-15 09:16:41.898211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.490 09:16:41 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:30.865 09:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:30.865 09:16:43 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 74873 00:23:37.504 0 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 74849 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 74849 ']' 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 74849 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74849 00:23:37.504 killing process with pid 74849 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:37.504 
09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74849' 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 74849 00:23:37.504 09:16:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 74849 00:23:37.504 09:16:49 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:37.504 [2024-05-15 09:16:31.953787] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:23:37.504 [2024-05-15 09:16:31.954013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74849 ] 00:23:37.504 [2024-05-15 09:16:32.097385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.504 [2024-05-15 09:16:32.201501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.504 Running I/O for 15 seconds... 00:23:37.504 [2024-05-15 09:16:34.980402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.980486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.980561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.980607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.980651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.980694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.980737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.980780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.980823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.980866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.980908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.980952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.980998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.981237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.981280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.981323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.981367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.981411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.981458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.981501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.504 [2024-05-15 09:16:34.981570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.504 [2024-05-15 09:16:34.981973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.504 [2024-05-15 09:16:34.981995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 
[2024-05-15 09:16:34.982178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.505 [2024-05-15 09:16:34.982641] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.505 [2024-05-15 09:16:34.982661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.505-00:23:37.507 [log condensed: from 09:16:34.982685 to 09:16:34.985852, nvme_io_qpair_print_command/spdk_nvme_print_completion repeat the same pattern for every outstanding READ (lba:91192-91560) and WRITE (lba:91704-91888) on sqid:1, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:37.507 [2024-05-15 09:16:34.985874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d8d30 is same with the state(5) to be set
00:23:37.507 [log condensed: from 09:16:34.985900 to 09:16:34.986558, nvme_qpair_abort_queued_reqs reports "aborting queued i/o" and nvme_qpair_manual_complete_request completes the queued READ (lba:91568) and WRITE (lba:91896-91952) requests manually, each as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:37.507 [2024-05-15 09:16:34.986627] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16d8d30 was disconnected and freed. reset controller.
00:23:37.507 [2024-05-15 09:16:34.986651] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:37.507 [2024-05-15 09:16:34.986724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.507 [2024-05-15 09:16:34.986748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.507 [2024-05-15 09:16:34.986770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.507 [2024-05-15 09:16:34.986790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.507 [2024-05-15 09:16:34.986821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.507 [2024-05-15 09:16:34.986842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.507 [2024-05-15 09:16:34.986864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.507 [2024-05-15 09:16:34.986884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.507 [2024-05-15 09:16:34.986904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.507 [2024-05-15 09:16:34.986973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16690f0 (9): Bad file descriptor 00:23:37.507 [2024-05-15 09:16:34.991556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.507 [2024-05-15 09:16:35.044685] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:37.507 [2024-05-15 09:16:38.588089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.507 [2024-05-15 09:16:38.588154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.507-00:23:37.510 [log condensed: from 09:16:38.588182 to 09:16:38.591536, after the controller reset nvme_io_qpair_print_command/spdk_nvme_print_completion again repeat the same pattern for every outstanding WRITE (lba:115688-116152) and READ (lba:115232-115544) on sqid:1, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:37.510 [2024-05-15 09:16:38.591560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-05-15 09:16:38.591576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.510 [2024-05-15 09:16:38.591607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.510 [2024-05-15 09:16:38.591639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.510 [2024-05-15 09:16:38.591671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.591702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.591761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.591793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.591826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.591859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.591901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.591934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.591967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.591984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.592000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.592032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.592065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.592099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.592132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.592165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.510 [2024-05-15 09:16:38.592198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d9c60 is same with the state(5) to be set 00:23:37.510 [2024-05-15 09:16:38.592234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.510 [2024-05-15 09:16:38.592246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.510 [2024-05-15 09:16:38.592257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115672 len:8 PRP1 0x0 PRP2 0x0 00:23:37.510 [2024-05-15 09:16:38.592273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.510 [2024-05-15 09:16:38.592301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.510 [2024-05-15 09:16:38.592312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116192 len:8 PRP1 0x0 PRP2 0x0 00:23:37.510 [2024-05-15 09:16:38.592334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.510 [2024-05-15 09:16:38.592361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.510 [2024-05-15 09:16:38.592372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116200 len:8 PRP1 0x0 PRP2 0x0 00:23:37.510 [2024-05-15 09:16:38.592387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.510 [2024-05-15 09:16:38.592414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.510 [2024-05-15 09:16:38.592426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116208 len:8 PRP1 0x0 PRP2 0x0 00:23:37.510 [2024-05-15 09:16:38.592440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.510 [2024-05-15 09:16:38.592467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.510 [2024-05-15 09:16:38.592479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116216 len:8 PRP1 0x0 PRP2 0x0 00:23:37.510 [2024-05-15 09:16:38.592494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.510 [2024-05-15 09:16:38.592521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.510 [2024-05-15 09:16:38.592533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116224 len:8 PRP1 0x0 PRP2 0x0 00:23:37.510 [2024-05-15 09:16:38.592548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.510 [2024-05-15 09:16:38.592573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.510 [2024-05-15 09:16:38.592584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.510 [2024-05-15 09:16:38.592596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116232 len:8 PRP1 0x0 PRP2 0x0 00:23:37.510 [2024-05-15 09:16:38.592611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:37.511 [2024-05-15 09:16:38.592627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:37.511 [2024-05-15 09:16:38.592638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:37.511 [2024-05-15 09:16:38.592650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116240 len:8 PRP1 0x0 PRP2 0x0
00:23:37.511 [2024-05-15 09:16:38.592665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.511 [2024-05-15 09:16:38.592680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:37.511 [2024-05-15 09:16:38.592692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:37.511 [2024-05-15 09:16:38.592703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116248 len:8 PRP1 0x0 PRP2 0x0
00:23:37.511 [2024-05-15 09:16:38.592718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.511 [2024-05-15 09:16:38.592775] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16d9c60 was disconnected and freed. reset controller.
00:23:37.511 [2024-05-15 09:16:38.592794] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:37.511 [2024-05-15 09:16:38.592856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.511 [2024-05-15 09:16:38.592874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.511 [2024-05-15 09:16:38.592891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.511 [2024-05-15 09:16:38.592906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.511 [2024-05-15 09:16:38.592922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.511 [2024-05-15 09:16:38.592937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.511 [2024-05-15 09:16:38.592954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:37.511 [2024-05-15 09:16:38.592969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.511 [2024-05-15 09:16:38.592984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:37.511 [2024-05-15 09:16:38.593043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16690f0 (9): Bad file descriptor
00:23:37.511 [2024-05-15 09:16:38.596408] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:37.511 [2024-05-15 09:16:38.633708] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
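The notices above are one complete failover cycle: the TCP qpair to 10.0.0.2:4421 is disconnected and freed, every queued command is completed manually with ABORTED - SQ DELETION, bdev_nvme fails the trid over to 10.0.0.2:4422, and the controller reset finishes with "Resetting controller successful." A minimal sketch of how such alternate paths can be registered through SPDK's rpc.py follows; the subsystem NQN matches the log, but the bdev name, serial number, RPC socket, and attach options are assumptions for illustration, not values taken from this job (depending on the SPDK release, an explicit multipath/failover policy may also be needed on the attach calls).

  # Target side (sketch): expose one subsystem on three TCP listeners so the host
  # has alternate paths to fail over to.
  rpc_py=scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

  # Host side (sketch): attach the same controller name once per path over the
  # bdevperf RPC socket; the extra trids serve as failover targets when the
  # current connection drops.
  for port in 4420 4421 4422; do
      $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s "$port" -n nqn.2016-06.io.spdk:cnode1
  done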
00:23:37.511 [2024-05-15 09:16:43.127291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.511 [2024-05-15 09:16:43.127390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.511 [2024-05-15 09:16:43.127449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.511 [2024-05-15 09:16:43.127498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.511 [2024-05-15 09:16:43.127562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16690f0 is same with the state(5) to be set 00:23:37.511 [2024-05-15 09:16:43.127660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.127975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.127990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.128023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.128055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.128087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.128120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.128152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.128187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.128228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.511 [2024-05-15 09:16:43.128261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.511 [2024-05-15 09:16:43.128295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.511 [2024-05-15 09:16:43.128327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.511 [2024-05-15 09:16:43.128360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.511 [2024-05-15 09:16:43.128393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.511 [2024-05-15 09:16:43.128425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.511 [2024-05-15 09:16:43.128457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.511 [2024-05-15 09:16:43.128490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.511 [2024-05-15 09:16:43.128523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.511 [2024-05-15 09:16:43.128551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 
[2024-05-15 09:16:43.128928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.128974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.128990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.512 [2024-05-15 09:16:43.129595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.512 [2024-05-15 09:16:43.129758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.512 [2024-05-15 09:16:43.129775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.129791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.129809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.129824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.129842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.129857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.129874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.129890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.129907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.129933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.129950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.129966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.129983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.129999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77224 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 
[2024-05-15 09:16:43.130639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.513 [2024-05-15 09:16:43.130671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.513 [2024-05-15 09:16:43.130974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.513 [2024-05-15 09:16:43.130991 - 09:16:43.132128] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the same command/completion pair repeats for every request still queued on sqid:1 - READ commands (various cids, lba:76880-76992, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (various cids, lba:77320-77440, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) - each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.514 
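Before the teardown records that follow, note that the run of abort notices above is uniform and easy to summarize from the captured bdevperf output. A minimal triage sketch using only standard tools - the try.txt path is the file referenced later in the trace, everything else is illustrative:

log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # bdevperf output file referenced later in the trace
# count the aborted completions and confirm they all carry the same status
grep -oE 'ABORTED - SQ DELETION \([0-9]+/[0-9]+\)' "$log" | sort | uniq -c
# lowest and highest LBA among the aborted READ/WRITE commands
grep -oE 'lba:[0-9]+' "$log" | cut -d: -f2 | sort -n | sed -n '1p;$p'
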
[2024-05-15 09:16:43.132180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:37.514 [2024-05-15 09:16:43.132194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:37.514 [2024-05-15 09:16:43.132206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76992 len:8 PRP1 0x0 PRP2 0x0 00:23:37.514 [2024-05-15 09:16:43.132222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.514 [2024-05-15 09:16:43.132290] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x176dc90 was disconnected and freed. reset controller. 00:23:37.514 [2024-05-15 09:16:43.132309] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:37.514 [2024-05-15 09:16:43.132325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.514 [2024-05-15 09:16:43.135837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.514 [2024-05-15 09:16:43.135893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16690f0 (9): Bad file descriptor 00:23:37.514 [2024-05-15 09:16:43.175157] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:37.514 00:23:37.514 Latency(us) 00:23:37.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.514 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:37.514 Verification LBA range: start 0x0 length 0x4000 00:23:37.514 NVMe0n1 : 15.01 9520.22 37.19 225.77 0.00 13103.53 639.76 15354.15 00:23:37.514 =================================================================================================================== 00:23:37.514 Total : 9520.22 37.19 225.77 0.00 13103.53 639.76 15354.15 00:23:37.514 Received shutdown signal, test time was about 15.000000 seconds 00:23:37.514 00:23:37.514 Latency(us) 00:23:37.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.514 =================================================================================================================== 00:23:37.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:37.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
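Interleaved with the NVMe teardown above, the failover.sh trace (lines 65-75) shows the script checking that three 'Resetting controller successful' events were logged during the first phase and then relaunching bdevperf in RPC-server mode for the single-run phase. A rough sketch of that checkpoint, with options copied from the trace; the real script uses autotest_common.sh's waitforlisten and feeds grep from the captured output rather than the try.txt path assumed here:

# first-phase checkpoint: three paths were detached, so three successful resets are expected
count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
(( count == 3 )) || exit 1

# relaunch bdevperf idle (-z) as an RPC server so the next run can be driven over a socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# simplified stand-in for waitforlisten: block until the RPC socket exists
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
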
00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75050 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75050 /var/tmp/bdevperf.sock 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 75050 ']' 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:37.514 09:16:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.080 09:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:38.080 09:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:23:38.080 09:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:38.080 [2024-05-15 09:16:50.474522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:38.080 09:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:38.338 [2024-05-15 09:16:50.746791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:38.338 09:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.903 NVMe0n1 00:23:38.903 09:16:51 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:39.160 00:23:39.160 09:16:51 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:39.736 00:23:39.736 09:16:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:39.736 09:16:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:39.992 09:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.248 09:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:43.533 09:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.533 09:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:43.533 09:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75134 00:23:43.533 09:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.533 09:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 75134 00:23:44.907 0 00:23:44.907 09:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:44.907 [2024-05-15 09:16:49.179809] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:23:44.907 [2024-05-15 09:16:49.179954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75050 ] 00:23:44.907 [2024-05-15 09:16:49.329040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.908 [2024-05-15 09:16:49.440752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.908 [2024-05-15 09:16:52.517731] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:44.908 [2024-05-15 09:16:52.517859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.908 [2024-05-15 09:16:52.517883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.908 [2024-05-15 09:16:52.517903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.908 [2024-05-15 09:16:52.517919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.908 [2024-05-15 09:16:52.517935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.908 [2024-05-15 09:16:52.517950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.908 [2024-05-15 09:16:52.517967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.908 [2024-05-15 09:16:52.517982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.908 [2024-05-15 09:16:52.517998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.908 [2024-05-15 09:16:52.518048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.908 [2024-05-15 09:16:52.518076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19040f0 (9): Bad file descriptor 00:23:44.908 [2024-05-15 09:16:52.537377] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:44.908 Running I/O for 1 seconds... 
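The second phase is driven entirely over that bdevperf RPC socket: failover.sh re-opens listeners on ports 4421 and 4422, attaches NVMe0 through 10.0.0.2:4420 and the two alternate ports, detaches the 4420 path, and asks the running bdevperf for a one-second verify pass; the 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421' notice in try.txt above is the direct result. Condensed to the RPC calls visible in the trace (the shell variables here are just shorthand, not names from the script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# target side: re-open the two alternate ports on the subsystem
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

# host side (inside bdevperf): attach one controller name with all three paths
for port in 4420 4421 4422; do
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
done
"$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0

# drop the primary path so bdev_nvme has to fail over, then run the verify pass
"$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
sleep 3
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
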
00:23:44.908 00:23:44.908 Latency(us) 00:23:44.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.908 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:44.908 Verification LBA range: start 0x0 length 0x4000 00:23:44.908 NVMe0n1 : 1.01 8907.51 34.79 0.00 0.00 14282.97 1614.99 15042.07 00:23:44.908 =================================================================================================================== 00:23:44.908 Total : 8907.51 34.79 0.00 0.00 14282.97 1614.99 15042.07 00:23:44.908 09:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:44.908 09:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:44.908 09:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:45.167 09:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:45.167 09:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:45.732 09:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:45.990 09:16:58 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 75050 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 75050 ']' 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 75050 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75050 00:23:49.275 killing process with pid 75050 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75050' 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 75050 00:23:49.275 09:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 75050 00:23:49.533 09:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:49.533 09:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:49.790 09:17:02 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.790 rmmod nvme_tcp 00:23:49.790 rmmod nvme_fabrics 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 74781 ']' 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 74781 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 74781 ']' 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 74781 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74781 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74781' 00:23:49.790 killing process with pid 74781 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 74781 00:23:49.790 [2024-05-15 09:17:02.099624] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:49.790 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 74781 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:50.048 00:23:50.048 real 0m33.765s 00:23:50.048 user 2m10.566s 00:23:50.048 sys 0m6.507s 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:50.048 09:17:02 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:23:50.048 ************************************ 00:23:50.048 END TEST nvmf_failover 00:23:50.048 ************************************ 00:23:50.048 09:17:02 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:50.048 09:17:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:50.048 09:17:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:50.048 09:17:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.048 ************************************ 00:23:50.048 START TEST nvmf_host_discovery 00:23:50.048 ************************************ 00:23:50.048 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:50.307 * Looking for test storage... 00:23:50.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:50.307 Cannot find device "nvmf_tgt_br" 00:23:50.307 
09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:50.307 Cannot find device "nvmf_tgt_br2" 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:50.307 Cannot find device "nvmf_tgt_br" 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:50.307 Cannot find device "nvmf_tgt_br2" 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:50.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:50.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:50.307 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:50.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:23:50.566 00:23:50.566 --- 10.0.0.2 ping statistics --- 00:23:50.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.566 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:50.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:50.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:23:50.566 00:23:50.566 --- 10.0.0.3 ping statistics --- 00:23:50.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.566 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:50.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:50.566 00:23:50.566 --- 10.0.0.1 ping statistics --- 00:23:50.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.566 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75406 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75406 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 75406 ']' 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:50.566 09:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.566 [2024-05-15 09:17:02.990493] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:23:50.566 [2024-05-15 09:17:02.991615] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.824 [2024-05-15 09:17:03.128907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.824 [2024-05-15 09:17:03.265972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.824 [2024-05-15 09:17:03.266277] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:50.824 [2024-05-15 09:17:03.266421] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.824 [2024-05-15 09:17:03.266495] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.824 [2024-05-15 09:17:03.266536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.824 [2024-05-15 09:17:03.266688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 [2024-05-15 09:17:03.938937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 [2024-05-15 09:17:03.946885] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:51.789 [2024-05-15 09:17:03.947281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 null0 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 null1 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:51.789 
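Before the discovery checks proper, the trace above gives the freshly started target something to discover: a TCP transport, a discovery listener on 10.0.0.2:8009, and two null bdevs. Reduced to the RPC calls visible in the trace; rpc_cmd below is a simplified stand-in for the autotest wrapper, which in the real run targets the nvmf_tgt started inside the nvmf_tgt_ns_spdk namespace:

# target-side bring-up for the discovery test, as traced above
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512    # name, size in MB, block size
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine
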
09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75438 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75438 /tmp/host.sock 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 75438 ']' 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:51.789 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:51.789 09:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 [2024-05-15 09:17:04.019482] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:23:51.789 [2024-05-15 09:17:04.019890] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75438 ] 00:23:51.789 [2024-05-15 09:17:04.161376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.047 [2024-05-15 09:17:04.281618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- 
# get_subsystem_names 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:52.613 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.871 09:17:05 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.871 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.872 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.131 [2024-05-15 09:17:05.347977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:23:53.131 09:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:23:53.699 [2024-05-15 09:17:06.053504] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:53.699 [2024-05-15 09:17:06.053765] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:53.699 [2024-05-15 09:17:06.053833] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:53.699 [2024-05-15 09:17:06.059533] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:53.699 [2024-05-15 09:17:06.115890] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:53.699 [2024-05-15 09:17:06.116134] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.266 09:17:06 
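Note: the "waitforcondition '<cond>'" expansions traced above come from a small retry helper in common/autotest_common.sh. Below is a minimal sketch reconstructed only from what the xtrace shows (local cond/max, eval of the condition string, sleep 1 between attempts, return 0 on success); the in-tree helper may differ in detail, and the failure path is an assumption since this run never exhausts its retries:

    waitforcondition() {
        local cond=$1   # condition string, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10    # retry budget seen in the trace
        while (( max-- )); do
            # re-evaluate the condition string on every pass
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1        # assumed failure path (not exercised in this run)
    }
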
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.266 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery 
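Note: the get_* helpers expanded throughout this trace are thin wrappers around the host-side RPC socket. The sketch below is pieced together from the commands visible above (rpc_cmd -s /tmp/host.sock plus jq/sort/xargs) rather than copied from host/discovery.sh, and the notify_id update is only an approximation that matches the values observed in this run:

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        # lists the trsvcid of every path attached to controller $1, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {
        # count only events newer than the notifications already consumed
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
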
-- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.526 [2024-05-15 09:17:06.865976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.526 [2024-05-15 09:17:06.867244] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.526 [2024-05-15 09:17:06.867441] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:54.526 [2024-05-15 09:17:06.873230] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.526 [2024-05-15 09:17:06.937698] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:54.526 [2024-05-15 09:17:06.937868] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:54.526 [2024-05-15 09:17:06.937962] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.526 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.787 09:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.787 [2024-05-15 09:17:07.077855] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.787 [2024-05-15 09:17:07.078037] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:54.787 [2024-05-15 09:17:07.083691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.787 [2024-05-15 09:17:07.083737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.787 [2024-05-15 09:17:07.083750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.787 [2024-05-15 09:17:07.083761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.787 [2024-05-15 09:17:07.083772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.787 [2024-05-15 09:17:07.083783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.787 [2024-05-15 09:17:07.083794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.787 [2024-05-15 09:17:07.083804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.787 [2024-05-15 09:17:07.083814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1420 is same with the state(5) to be set 00:23:54.787 [2024-05-15 09:17:07.083880] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:54.787 [2024-05-15 09:17:07.083899] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.787 [2024-05-15 09:17:07.083954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb1420 (9): Bad file descriptor 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.787 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.788 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:55.046 
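Note: the reconfiguration traced above maps onto plain RPC calls: the target gains a listener on 4421, loses the one on 4420, and the host-side discovery service is finally stopped. Run by hand the steps would look roughly like the following, assuming rpc_cmd wraps scripts/rpc.py, the target uses the default RPC socket, and the host service listens on /tmp/host.sock as in the trace:

    # target side: add the second listener, then drop the first one
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: confirm only the 4421 path remains, then stop the discovery service
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
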
09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.046 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.047 09:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.984 [2024-05-15 09:17:08.425432] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:55.984 [2024-05-15 09:17:08.425687] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:55.984 [2024-05-15 09:17:08.425759] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.245 [2024-05-15 09:17:08.431468] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:56.246 [2024-05-15 09:17:08.491060] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:56.246 [2024-05-15 09:17:08.491346] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:56.246 request: 00:23:56.246 { 00:23:56.246 "name": "nvme", 00:23:56.246 "trtype": "tcp", 00:23:56.246 "traddr": "10.0.0.2", 00:23:56.246 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:56.246 "adrfam": "ipv4", 00:23:56.246 "trsvcid": "8009", 00:23:56.246 "wait_for_attach": true, 00:23:56.246 "method": "bdev_nvme_start_discovery", 00:23:56.246 "req_id": 1 00:23:56.246 } 00:23:56.246 Got JSON-RPC error response 00:23:56.246 response: 00:23:56.246 { 00:23:56.246 "code": -17, 00:23:56.246 "message": "File exists" 00:23:56.246 } 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- 
# local arg=rpc_cmd 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.246 request: 00:23:56.246 { 00:23:56.246 "name": "nvme_second", 00:23:56.246 "trtype": "tcp", 00:23:56.246 "traddr": "10.0.0.2", 00:23:56.246 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:56.246 "adrfam": "ipv4", 00:23:56.246 "trsvcid": "8009", 00:23:56.246 "wait_for_attach": true, 00:23:56.246 "method": "bdev_nvme_start_discovery", 00:23:56.246 "req_id": 1 00:23:56.246 } 00:23:56.246 Got JSON-RPC error response 00:23:56.246 response: 00:23:56.246 { 00:23:56.246 "code": -17, 00:23:56.246 "message": "File exists" 00:23:56.246 } 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.246 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.505 09:17:08 
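Note: both negative cases above fail the same way: once a discovery service already exists for that name or discovery address, a second bdev_nvme_start_discovery on the host socket is rejected with JSON-RPC code -17 ("File exists"), and the NOT wrapper turns that expected failure into a passing check. The equivalent manual call, with the same arguments as in the trace, would be:

    # expected to fail while a discovery service named "nvme" is already attached:
    #   Got JSON-RPC error response, code -17, message "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
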
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.505 09:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.441 [2024-05-15 09:17:09.724828] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.441 [2024-05-15 09:17:09.725214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.441 [2024-05-15 09:17:09.725294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.441 [2024-05-15 09:17:09.725386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0df90 with addr=10.0.0.2, port=8010 00:23:57.441 [2024-05-15 09:17:09.725564] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:57.441 [2024-05-15 09:17:09.725653] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:57.441 [2024-05-15 09:17:09.725690] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:58.377 [2024-05-15 09:17:10.724863] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.377 [2024-05-15 09:17:10.725215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.377 [2024-05-15 09:17:10.725299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.377 [2024-05-15 09:17:10.725396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0d950 with addr=10.0.0.2, port=8010 00:23:58.377 [2024-05-15 09:17:10.725472] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:58.377 [2024-05-15 09:17:10.725566] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:58.377 [2024-05-15 09:17:10.725700] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:59.312 [2024-05-15 09:17:11.724707] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:59.312 request: 00:23:59.312 { 00:23:59.312 "name": "nvme_second", 00:23:59.312 
"trtype": "tcp", 00:23:59.312 "traddr": "10.0.0.2", 00:23:59.312 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:59.312 "adrfam": "ipv4", 00:23:59.312 "trsvcid": "8010", 00:23:59.312 "attach_timeout_ms": 3000, 00:23:59.312 "method": "bdev_nvme_start_discovery", 00:23:59.312 "req_id": 1 00:23:59.312 } 00:23:59.312 Got JSON-RPC error response 00:23:59.312 response: 00:23:59.312 { 00:23:59.312 "code": -110, 00:23:59.312 "message": "Connection timed out" 00:23:59.312 } 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.312 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75438 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.571 rmmod nvme_tcp 00:23:59.571 rmmod nvme_fabrics 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75406 ']' 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75406 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 75406 ']' 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 75406 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # 
uname 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75406 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:59.571 killing process with pid 75406 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75406' 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 75406 00:23:59.571 09:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 75406 00:23:59.571 [2024-05-15 09:17:11.934577] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:59.829 09:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.829 09:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.829 09:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:59.830 ************************************ 00:23:59.830 END TEST nvmf_host_discovery 00:23:59.830 ************************************ 00:23:59.830 00:23:59.830 real 0m9.738s 00:23:59.830 user 0m18.166s 00:23:59.830 sys 0m2.295s 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.830 09:17:12 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:59.830 09:17:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:59.830 09:17:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:59.830 09:17:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.830 ************************************ 00:23:59.830 START TEST nvmf_host_multipath_status 00:23:59.830 ************************************ 00:23:59.830 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:00.089 * Looking for test storage... 
00:24:00.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:00.089 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:00.090 Cannot find device "nvmf_tgt_br" 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:24:00.090 Cannot find device "nvmf_tgt_br2" 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:00.090 Cannot find device "nvmf_tgt_br" 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:00.090 Cannot find device "nvmf_tgt_br2" 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:00.090 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.350 09:17:12 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:00.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:24:00.350 00:24:00.350 --- 10.0.0.2 ping statistics --- 00:24:00.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.350 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:00.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:00.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:00.350 00:24:00.350 --- 10.0.0.3 ping statistics --- 00:24:00.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.350 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:00.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:00.350 00:24:00.350 --- 10.0.0.1 ping statistics --- 00:24:00.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.350 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.350 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=75883 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 75883 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 75883 ']' 00:24:00.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:00.610 09:17:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:00.610 [2024-05-15 09:17:12.885487] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:24:00.610 [2024-05-15 09:17:12.885902] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.610 [2024-05-15 09:17:13.041889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:00.871 [2024-05-15 09:17:13.143464] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:00.871 [2024-05-15 09:17:13.143693] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.871 [2024-05-15 09:17:13.143821] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.871 [2024-05-15 09:17:13.143875] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.871 [2024-05-15 09:17:13.143973] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.871 [2024-05-15 09:17:13.144136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.871 [2024-05-15 09:17:13.144137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.452 09:17:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:01.452 09:17:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:24:01.452 09:17:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.452 09:17:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:01.452 09:17:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:01.725 09:17:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.725 09:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75883 00:24:01.725 09:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:01.983 [2024-05-15 09:17:14.182351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.983 09:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:02.241 Malloc0 00:24:02.241 09:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:02.499 09:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:02.756 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.014 [2024-05-15 09:17:15.309529] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:03.014 [2024-05-15 09:17:15.310082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.014 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:03.272 [2024-05-15 09:17:15.593939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75939 00:24:03.272 09:17:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75939 /var/tmp/bdevperf.sock 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 75939 ']' 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:03.272 09:17:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:04.206 09:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:04.206 09:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:24:04.206 09:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:04.501 09:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:05.069 Nvme0n1 00:24:05.069 09:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:05.327 Nvme0n1 00:24:05.327 09:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:05.327 09:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:07.228 09:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:07.228 09:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:07.794 09:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.795 09:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:09.171 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:09.171 09:17:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:09.171 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.171 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.171 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.171 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:09.171 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.171 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.430 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.430 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.430 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.430 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.688 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.688 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.688 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.688 09:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.947 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.947 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.947 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.947 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:10.244 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.244 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:10.244 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.244 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.503 09:17:22 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.503 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:10.503 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:10.503 09:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:11.071 09:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:12.007 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:12.007 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:12.007 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.007 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:12.267 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.267 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:12.267 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:12.267 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.526 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.526 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:12.526 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.526 09:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.785 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.785 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.785 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.785 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.043 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.043 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:13.043 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.043 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:13.301 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.301 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:13.301 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:13.301 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.560 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.560 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:13.560 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:13.560 09:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:14.127 09:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:15.063 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:15.063 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:15.063 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.063 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:15.321 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.321 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:15.321 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.321 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:15.580 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:15.580 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:15.580 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.580 09:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:24:15.839 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.839 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:15.839 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.839 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.149 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.149 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.149 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.149 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.408 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.408 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:16.408 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:16.408 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.666 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.666 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:16.666 09:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:16.925 09:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:16.925 09:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:18.300 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:18.301 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:18.301 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.301 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.301 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.301 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # 
port_status 4421 current false 00:24:18.301 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.301 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.558 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.558 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.558 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.558 09:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:18.816 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.816 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:18.816 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.816 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.124 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.124 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.124 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.124 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.398 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.398 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:19.398 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.398 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.657 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.657 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:19.657 09:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:19.916 09:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n inaccessible 00:24:19.916 09:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:21.287 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:21.287 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:21.287 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.287 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.287 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.287 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.287 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.287 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.544 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.544 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.544 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.544 09:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.801 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.801 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.801 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.802 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.060 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.060 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:22.060 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.060 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.627 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.627 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:22.627 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:24:22.627 09:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.886 09:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.886 09:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:22.886 09:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:22.886 09:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:23.144 09:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:24.515 09:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:24.515 09:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:24.515 09:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.515 09:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.515 09:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.515 09:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:24.515 09:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.515 09:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.772 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.772 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.772 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.772 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.062 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.062 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.062 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.062 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.335 09:17:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.335 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:25.335 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.335 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.592 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.592 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:25.592 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.592 09:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:25.850 09:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.850 09:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:26.416 09:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:26.416 09:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:26.416 09:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:26.673 09:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:28.049 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:28.049 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:28.049 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.049 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:28.049 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.049 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:28.049 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:28.049 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.373 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.373 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:28.373 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:28.373 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.631 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.631 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:28.631 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.631 09:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:28.888 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.888 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:28.888 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.888 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.146 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.146 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:29.146 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.146 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:29.404 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.404 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:29.404 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:29.662 09:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:29.920 09:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:30.854 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:30.855 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:30.855 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:30.855 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.113 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:31.113 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:31.113 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.113 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:31.371 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.371 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:31.371 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.371 09:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:31.630 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.630 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:31.630 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.630 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:31.888 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.888 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:31.888 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:31.888 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.146 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.146 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:32.146 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:32.146 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.404 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.404 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # 
set_ANA_state non_optimized non_optimized 00:24:32.404 09:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:32.662 09:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:33.233 09:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:34.228 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:34.228 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:34.228 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.228 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.228 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.228 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:34.228 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.228 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.487 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.487 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.487 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.487 09:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.745 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.745 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.745 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.745 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.312 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.312 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.312 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.312 09:17:47 
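(Annotation: check_status strings six assertions together, current/connected/accessible for port 4420 and then 4421, matching the @68-@73 call order seen in the trace. A sketch built on the port_status helper outlined earlier; whether a single mismatch aborts the run immediately depends on the harness's errexit settings, which are not shown here.)
check_status() {
    # argument order mirrors the trace: current, then connected, then accessible,
    # each checked for 4420 and 4421 in turn
    port_status 4420 current    "$1"
    port_status 4421 current    "$2"
    port_status 4420 connected  "$3"
    port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}
# e.g. check_status true false true true true false   # 4421 current/accessible expected false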
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.312 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.312 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:35.312 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.312 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.570 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.570 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:35.570 09:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:35.828 09:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:36.087 09:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:37.021 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:37.021 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:37.021 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.021 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.290 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.290 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:37.290 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.290 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.577 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.577 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.577 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.577 09:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.835 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:37.835 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.835 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.835 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.093 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.093 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.093 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.093 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.351 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.351 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:38.351 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.351 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75939 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 75939 ']' 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 75939 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75939 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75939' 00:24:38.609 killing process with pid 75939 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 75939 00:24:38.609 09:17:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 75939 00:24:38.609 Connection closed with partial response: 00:24:38.609 00:24:38.609 00:24:38.871 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75939 00:24:38.871 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:38.871 [2024-05-15 09:17:15.670102] Starting SPDK 
v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:24:38.871 [2024-05-15 09:17:15.670226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75939 ] 00:24:38.871 [2024-05-15 09:17:15.816921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.871 [2024-05-15 09:17:15.933940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.871 Running I/O for 90 seconds... 00:24:38.871 [2024-05-15 09:17:32.102528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.871 [2024-05-15 09:17:32.103154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.871 [2024-05-15 09:17:32.103307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.103381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.103450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.103512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.103638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.103717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.103821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.103908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.103976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.105618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.105721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.105794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.105867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.105926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.106000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.106064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.106128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.106191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.106266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.106350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.106421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.106490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.106574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.106641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.106716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.106780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.106849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.106909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.106978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.107058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.107153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.107228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.107303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.107368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:24:38.872 [2024-05-15 09:17:32.107448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.107518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.107612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.107675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.107759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.107830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.107898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.107967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.108039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.108107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.108194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.108258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.108328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.108379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.108453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.108518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.108600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.108666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.108740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.108805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.108883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.108943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.109004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.109057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.109126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.109185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.109268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.109337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.109403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.109468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.109533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.109612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.109675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.109736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.109817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.109894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.109959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.110024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.110090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.110159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.110220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.110281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.110346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.110428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.110495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.110559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.110662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.110723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.110795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.110860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.110926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.111002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.111072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.111134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.111201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.111271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.111343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.111403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.111473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.111571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.111645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.111710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.111787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.111846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.111916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.111982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.112051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.112112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.112185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.112249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.112318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.112387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.112457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.112508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.112584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.872 [2024-05-15 09:17:32.112649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.112726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.112791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.112858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:29 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.112934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.113001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.113062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.113127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.113202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.113270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.113423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.113498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.113592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.872 [2024-05-15 09:17:32.113664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.872 [2024-05-15 09:17:32.113730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.113792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.113852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.113927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.113992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.114085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.114136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.114202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.114263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 
09:17:32.114329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.114389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.114451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.114511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.114592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.114657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.114720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.114781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.114856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.114929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.115000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.115057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.115128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.115179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.115257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.115335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.115427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.115488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.115557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.115636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.115704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.115779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.115848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.115917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.115982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.116047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.116120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.116177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.116242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.116299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.116368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.116433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.116498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.116559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.116634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.116691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.116758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.116820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.116882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.116946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.117011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.117075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.117141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.117198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.117260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.117320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.117389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.117457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.117518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.117599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.117671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.117735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.117802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.117863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.117928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.117984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.118047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.873 [2024-05-15 09:17:32.118116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.118195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:38.873 [2024-05-15 09:17:32.118252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.118318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.118374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.118444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.118505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.118584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.118637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.118706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.118772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.118854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.118913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.118971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.119047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.119122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.119193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.119258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.119319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.119384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.119449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.119511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:112 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.119579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.119643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.119703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.119779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.119857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.119947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.120010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.120079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.120150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.120222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.120275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.120342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.120401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.120483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.120559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.120627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.120680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 09:17:32.120752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.873 [2024-05-15 09:17:32.120819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.873 [2024-05-15 
09:17:32.120896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.873 [2024-05-15 09:17:32.120955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:38.876 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for further READ and WRITE commands on sqid:1 (lba ranges ~119040-119640 and ~46040-47320), all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:24:38.876 Received shutdown signal, test time was about 33.167843 seconds
00:24:38.876
00:24:38.876                                                                                                Latency(us)
00:24:38.876 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:38.876 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:38.876 Verification LBA range: start 0x0 length 0x4000
00:24:38.876 Nvme0n1                                                                  :      33.17    9344.31      36.50       0.00     0.00   13671.34     214.55 4042510.14
00:24:38.876 ===================================================================================================================
00:24:38.876 Total                                                                    :               9344.31      36.50       0.00     0.00   13671.34     214.55 4042510.14
00:24:38.876 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem
nqn.2016-06.io.spdk:cnode1 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.133 rmmod nvme_tcp 00:24:39.133 rmmod nvme_fabrics 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 75883 ']' 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 75883 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 75883 ']' 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 75883 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75883 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75883' 00:24:39.133 killing process with pid 75883 00:24:39.133 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 75883 00:24:39.133 [2024-05-15 09:17:51.511124] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 75883 00:24:39.133 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:39.391 ************************************ 00:24:39.391 END TEST nvmf_host_multipath_status 00:24:39.391 ************************************ 00:24:39.391 00:24:39.391 real 0m39.543s 00:24:39.391 user 2m4.642s 00:24:39.391 sys 0m14.031s 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:39.391 09:17:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:39.649 09:17:51 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:39.649 09:17:51 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:39.649 09:17:51 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:39.649 09:17:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:39.649 ************************************ 00:24:39.649 START TEST nvmf_discovery_remove_ifc 00:24:39.649 ************************************ 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:39.649 * Looking for test storage... 00:24:39.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.649 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:39.650 09:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:39.650 Cannot find device "nvmf_tgt_br" 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:39.650 Cannot find device "nvmf_tgt_br2" 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:39.650 Cannot find device "nvmf_tgt_br" 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:39.650 Cannot find device "nvmf_tgt_br2" 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:24:39.650 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:39.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:39.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:39.927 09:17:52 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:39.927 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:39.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:24:39.928 00:24:39.928 --- 10.0.0.2 ping statistics --- 00:24:39.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.928 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:39.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:39.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:24:39.928 00:24:39.928 --- 10.0.0.3 ping statistics --- 00:24:39.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.928 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:39.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:39.928 00:24:39.928 --- 10.0.0.1 ping statistics --- 00:24:39.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.928 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.928 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=76723 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76723 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 76723 ']' 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:40.206 09:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.206 [2024-05-15 09:17:52.421183] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
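A quick way to sanity-check the veth topology that nvmf_veth_init finished building just above (the interface and namespace names and the 10.0.0.x addresses are taken from the trace; running these checks by hand outside the harness is an assumption, they are not part of the test):

  ip -br addr show dev nvmf_init_if                  # host side of the link, expect 10.0.0.1/24
  ip netns exec nvmf_tgt_ns_spdk ip -br addr         # expect nvmf_tgt_if 10.0.0.2/24 and nvmf_tgt_if2 10.0.0.3/24
  bridge link show | grep nvmf_br                    # nvmf_init_br, nvmf_tgt_br and nvmf_tgt_br2 enslaved to nvmf_br
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # reverse direction of the pings the script already ran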
00:24:40.206 [2024-05-15 09:17:52.421452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.206 [2024-05-15 09:17:52.554960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.465 [2024-05-15 09:17:52.676641] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.465 [2024-05-15 09:17:52.676923] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.465 [2024-05-15 09:17:52.677035] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.465 [2024-05-15 09:17:52.677133] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.465 [2024-05-15 09:17:52.677169] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.465 [2024-05-15 09:17:52.677261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.032 [2024-05-15 09:17:53.378651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.032 [2024-05-15 09:17:53.386596] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:41.032 [2024-05-15 09:17:53.387100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:41.032 null0 00:24:41.032 [2024-05-15 09:17:53.418772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76755 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76755 /tmp/host.sock 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 76755 ']' 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local 
rpc_addr=/tmp/host.sock 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:41.032 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:41.032 09:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.291 [2024-05-15 09:17:53.489127] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:24:41.291 [2024-05-15 09:17:53.489442] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76755 ] 00:24:41.291 [2024-05-15 09:17:53.625900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.291 [2024-05-15 09:17:53.730500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.228 09:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.161 [2024-05-15 09:17:55.531784] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:43.161 [2024-05-15 09:17:55.532070] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:43.161 [2024-05-15 09:17:55.532139] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 
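The host-side setup traced above boils down to a few RPCs against the app listening on /tmp/host.sock. A minimal standalone sketch of that sequence (addresses, NQN and timeout values copied from the xtrace; rpc_cmd in the trace is a test-harness wrapper, so calling scripts/rpc.py directly here is an assumption):

  # start the host-side app with bdev_nvme debug logging, then drive it over /tmp/host.sock
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  # attach to the discovery service on 10.0.0.2:8009 and wait until the NVM subsystem is attached
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs    # the test polls this until nvme0n1 shows up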
00:24:43.161 [2024-05-15 09:17:55.537826] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:43.161 [2024-05-15 09:17:55.594386] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:43.161 [2024-05-15 09:17:55.594742] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:43.161 [2024-05-15 09:17:55.594811] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:43.161 [2024-05-15 09:17:55.594911] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:43.161 [2024-05-15 09:17:55.595076] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:43.161 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.161 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:43.161 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.162 [2024-05-15 09:17:55.600564] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c19fd0 was disconnected and freed. delete nvme_qpair. 00:24:43.162 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.162 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.162 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.162 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.162 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.162 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:43.419 09:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.355 09:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:45.730 09:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:46.666 09:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:47.602 09:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.535 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.535 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.535 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.535 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.535 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.535 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.535 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.535 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.795 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:48.795 09:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.795 [2024-05-15 09:18:01.032158] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:48.795 [2024-05-15 09:18:01.032250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.795 [2024-05-15 09:18:01.032266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.795 [2024-05-15 09:18:01.032282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.795 [2024-05-15 09:18:01.032293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.795 [2024-05-15 09:18:01.032305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.795 [2024-05-15 09:18:01.032315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.795 [2024-05-15 09:18:01.032327] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.795 [2024-05-15 09:18:01.032337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.795 [2024-05-15 09:18:01.032349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.795 [2024-05-15 09:18:01.032360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.795 [2024-05-15 09:18:01.032371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b80e00 is same with the state(5) to be set 00:24:48.795 [2024-05-15 09:18:01.042146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b80e00 (9): Bad file descriptor 00:24:48.795 [2024-05-15 09:18:01.052179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:49.732 09:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.732 09:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.732 09:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.732 09:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.732 09:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.732 09:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.732 09:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.732 [2024-05-15 09:18:02.092619] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:24:51.107 [2024-05-15 09:18:03.116629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:52.042 [2024-05-15 09:18:04.140645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:52.042 [2024-05-15 09:18:04.140789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b80e00 with addr=10.0.0.2, port=4420 00:24:52.042 [2024-05-15 09:18:04.140832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b80e00 is same with the state(5) to be set 00:24:52.042 [2024-05-15 09:18:04.141747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b80e00 (9): Bad file descriptor 00:24:52.042 [2024-05-15 09:18:04.141826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.042 [2024-05-15 09:18:04.141877] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:52.042 [2024-05-15 09:18:04.141950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.042 [2024-05-15 09:18:04.141982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.042 [2024-05-15 09:18:04.142014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.042 [2024-05-15 09:18:04.142041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.042 [2024-05-15 09:18:04.142070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.042 [2024-05-15 09:18:04.142097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.042 [2024-05-15 09:18:04.142124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.042 [2024-05-15 09:18:04.142150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.042 [2024-05-15 09:18:04.142178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.042 [2024-05-15 09:18:04.142204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.042 [2024-05-15 09:18:04.142231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
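The errno 110 connect failures and the ABORTED - SQ DELETION completions above are the intended fault, not a test bug: earlier in the trace the script deleted 10.0.0.2/24 from nvmf_tgt_if inside the target namespace and downed the link, so the host's reconnect attempts time out until the ctrlr-loss timeout declares the controller failed. Condensed, the interface toggling that drives this phase is (commands restated from the xtrace, same namespace and interface names):

  # fault injection: pull the target address and link out from under the connected controller
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # recovery: restore them so the discovery service re-attaches the subsystem (as nvme1 below)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up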
00:24:52.042 [2024-05-15 09:18:04.142290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b80670 (9): Bad file descriptor 00:24:52.042 [2024-05-15 09:18:04.143297] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:52.042 [2024-05-15 09:18:04.143356] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:52.042 09:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.042 09:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:52.042 09:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:53.033 09:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.968 [2024-05-15 09:18:06.147986] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:53.968 [2024-05-15 09:18:06.148028] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:53.968 [2024-05-15 09:18:06.148049] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:53.968 [2024-05-15 09:18:06.154024] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:53.968 [2024-05-15 09:18:06.209258] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:53.968 [2024-05-15 09:18:06.209341] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:53.968 [2024-05-15 09:18:06.209363] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:53.968 [2024-05-15 09:18:06.209383] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:53.968 [2024-05-15 09:18:06.209393] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:53.968 [2024-05-15 09:18:06.216762] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bed550 was disconnected and freed. delete nvme_qpair. 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76755 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 76755 ']' 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 76755 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 76755 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 76755' 00:24:53.968 killing process with pid 76755 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@966 -- # kill 76755 00:24:53.968 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 76755 00:24:54.227 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:54.227 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:54.227 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:54.227 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.227 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:54.227 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.227 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.227 rmmod nvme_tcp 00:24:54.485 rmmod nvme_fabrics 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76723 ']' 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76723 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 76723 ']' 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 76723 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 76723 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 76723' 00:24:54.485 killing process with pid 76723 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 76723 00:24:54.485 [2024-05-15 09:18:06.731090] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:54.485 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 76723 00:24:54.744 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:54.744 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:54.744 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:54.744 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.744 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.744 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.744 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.744 09:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.744 09:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:54.744 00:24:54.744 real 0m15.143s 00:24:54.744 user 0m23.538s 00:24:54.744 sys 0m3.172s 00:24:54.744 09:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:54.744 ************************************ 00:24:54.744 END TEST nvmf_discovery_remove_ifc 00:24:54.744 09:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.744 ************************************ 00:24:54.744 09:18:07 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:54.744 09:18:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:54.744 09:18:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:54.744 09:18:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.744 ************************************ 00:24:54.744 START TEST nvmf_identify_kernel_target 00:24:54.744 ************************************ 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:54.744 * Looking for test storage... 00:24:54.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.744 09:18:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.744 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:55.003 09:18:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:55.003 Cannot find device "nvmf_tgt_br" 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:55.003 Cannot find device "nvmf_tgt_br2" 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:55.003 Cannot find device "nvmf_tgt_br" 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:55.003 Cannot find device "nvmf_tgt_br2" 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:55.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:55.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:55.003 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:55.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:24:55.262 00:24:55.262 --- 10.0.0.2 ping statistics --- 00:24:55.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.262 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:55.262 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:55.262 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:24:55.262 00:24:55.262 --- 10.0.0.3 ping statistics --- 00:24:55.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.262 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:55.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:24:55.262 00:24:55.262 --- 10.0.0.1 ping statistics --- 00:24:55.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.262 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:55.262 09:18:07 
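The nvmf_veth_init trace above builds the whole TCP test network in software: the target-side veth ends (10.0.0.2 and 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, the initiator end nvmf_init_if (10.0.0.1) stays in the root namespace, the host-side peers are tied together on the nvmf_br bridge, and the three pings confirm the path works before any NVMe traffic is attempted. A condensed sketch of the same setup, using the interface names from this run and with the ordering slightly regrouped for readability:

    # condensed sketch of the veth/bridge topology nvmf_veth_init builds above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if && ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk sh -c '
      ip addr add 10.0.0.2/24 dev nvmf_tgt_if  && ip link set nvmf_tgt_if  up
      ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 && ip link set nvmf_tgt_if2 up
      ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do         # bridge the host-side peers
      ip link set "$dev" up; ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator

Keeping the target side in its own namespace lets the test exercise a real TCP path end to end without depending on physical NICs on the CI host.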
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:55.262 09:18:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:55.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:55.826 Waiting for block devices as requested 00:24:55.826 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:56.084 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:56.084 No valid GPT data, bailing 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:56.084 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:56.360 No valid GPT data, bailing 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:56.360 No valid GPT data, bailing 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:56.360 No valid GPT data, bailing 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target 
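The per-device checks above decide which NVMe block device may back the kernel target namespace: anything zoned or already carrying a partition table is skipped, and the "No valid GPT data, bailing" lines are the probe reporting a device as free to claim. A minimal sketch of that selection loop, assuming the zoned flag is read from queue/zoned in sysfs and substituting a plain blkid probe for the spdk-gpt.py helper used in the trace:

    # pick an unused, non-zoned NVMe block device to export (sketch; last match wins, as in the trace)
    nvme=
    for block in /sys/block/nvme*; do
      dev=/dev/${block##*/}
      [[ $(cat "$block/queue/zoned" 2>/dev/null) == none ]] || continue   # skip zoned namespaces
      [[ -z $(blkid -s PTTYPE -o value "$dev") ]] || continue             # skip devices with a partition table
      nvme=$dev
    done
    echo "selected backing device: ${nvme:-none}"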
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid=c738663f-2662-4398-b539-15f14394251b -a 10.0.0.1 -t tcp -s 4420 00:24:56.360 00:24:56.360 Discovery Log Number of Records 2, Generation counter 2 00:24:56.360 =====Discovery Log Entry 0====== 00:24:56.360 trtype: tcp 00:24:56.360 adrfam: ipv4 00:24:56.360 subtype: current discovery subsystem 00:24:56.360 treq: not specified, sq flow control disable supported 00:24:56.360 portid: 1 00:24:56.360 trsvcid: 4420 00:24:56.360 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:56.360 traddr: 10.0.0.1 00:24:56.360 eflags: none 00:24:56.360 sectype: none 00:24:56.360 =====Discovery Log Entry 1====== 00:24:56.360 trtype: tcp 00:24:56.360 adrfam: ipv4 00:24:56.360 subtype: nvme subsystem 00:24:56.360 treq: not specified, sq flow control disable supported 00:24:56.360 portid: 1 00:24:56.360 trsvcid: 4420 00:24:56.360 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:56.360 traddr: 10.0.0.1 00:24:56.360 eflags: none 00:24:56.360 sectype: none 00:24:56.360 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:56.360 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:56.619 ===================================================== 00:24:56.619 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:56.619 ===================================================== 00:24:56.619 Controller Capabilities/Features 00:24:56.619 ================================ 00:24:56.619 Vendor ID: 0000 00:24:56.619 Subsystem Vendor ID: 0000 00:24:56.619 Serial Number: 33a07adb48001b8dd12e 00:24:56.619 Model Number: Linux 00:24:56.619 Firmware Version: 6.5.12-2 00:24:56.619 Recommended Arb Burst: 0 
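The mkdir/echo/ln -s sequence above is the entire kernel NVMe-oF target configuration: create a subsystem and a namespace under /sys/kernel/config/nvmet, attach the chosen block device, create a TCP port on 10.0.0.1:4420, and symlink the subsystem into the port so the subsequent discover call can see it. A commented sketch of those steps follows; the attribute file names are my reading of the standard nvmet configfs layout rather than something the trace spells out, so verify them against your kernel before reuse:

    # export /dev/nvme1n1 as nqn.2016-06.io.spdk:testnqn over TCP via the kernel nvmet target (sketch)
    modprobe nvmet
    modprobe nvmet_tcp                                             # tcp transport for the target side
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # surfaces as the Model Number below
    echo 1            > "$subsys/attr_allow_any_host"              # no host whitelist for the test
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                            # publishing the link makes the port serve the subsystem
    nvme discover -t tcp -a 10.0.0.1 -s 4420                       # should now list the discovery subsystem and testnqn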
00:24:56.619 IEEE OUI Identifier: 00 00 00 00:24:56.619 Multi-path I/O 00:24:56.619 May have multiple subsystem ports: No 00:24:56.619 May have multiple controllers: No 00:24:56.619 Associated with SR-IOV VF: No 00:24:56.619 Max Data Transfer Size: Unlimited 00:24:56.619 Max Number of Namespaces: 0 00:24:56.619 Max Number of I/O Queues: 1024 00:24:56.619 NVMe Specification Version (VS): 1.3 00:24:56.619 NVMe Specification Version (Identify): 1.3 00:24:56.619 Maximum Queue Entries: 1024 00:24:56.619 Contiguous Queues Required: No 00:24:56.619 Arbitration Mechanisms Supported 00:24:56.619 Weighted Round Robin: Not Supported 00:24:56.619 Vendor Specific: Not Supported 00:24:56.619 Reset Timeout: 7500 ms 00:24:56.619 Doorbell Stride: 4 bytes 00:24:56.619 NVM Subsystem Reset: Not Supported 00:24:56.619 Command Sets Supported 00:24:56.619 NVM Command Set: Supported 00:24:56.619 Boot Partition: Not Supported 00:24:56.619 Memory Page Size Minimum: 4096 bytes 00:24:56.619 Memory Page Size Maximum: 4096 bytes 00:24:56.619 Persistent Memory Region: Not Supported 00:24:56.619 Optional Asynchronous Events Supported 00:24:56.619 Namespace Attribute Notices: Not Supported 00:24:56.619 Firmware Activation Notices: Not Supported 00:24:56.619 ANA Change Notices: Not Supported 00:24:56.619 PLE Aggregate Log Change Notices: Not Supported 00:24:56.619 LBA Status Info Alert Notices: Not Supported 00:24:56.619 EGE Aggregate Log Change Notices: Not Supported 00:24:56.619 Normal NVM Subsystem Shutdown event: Not Supported 00:24:56.619 Zone Descriptor Change Notices: Not Supported 00:24:56.619 Discovery Log Change Notices: Supported 00:24:56.619 Controller Attributes 00:24:56.619 128-bit Host Identifier: Not Supported 00:24:56.619 Non-Operational Permissive Mode: Not Supported 00:24:56.619 NVM Sets: Not Supported 00:24:56.619 Read Recovery Levels: Not Supported 00:24:56.619 Endurance Groups: Not Supported 00:24:56.619 Predictable Latency Mode: Not Supported 00:24:56.619 Traffic Based Keep ALive: Not Supported 00:24:56.619 Namespace Granularity: Not Supported 00:24:56.619 SQ Associations: Not Supported 00:24:56.619 UUID List: Not Supported 00:24:56.619 Multi-Domain Subsystem: Not Supported 00:24:56.619 Fixed Capacity Management: Not Supported 00:24:56.619 Variable Capacity Management: Not Supported 00:24:56.619 Delete Endurance Group: Not Supported 00:24:56.619 Delete NVM Set: Not Supported 00:24:56.619 Extended LBA Formats Supported: Not Supported 00:24:56.619 Flexible Data Placement Supported: Not Supported 00:24:56.619 00:24:56.619 Controller Memory Buffer Support 00:24:56.619 ================================ 00:24:56.619 Supported: No 00:24:56.619 00:24:56.619 Persistent Memory Region Support 00:24:56.619 ================================ 00:24:56.619 Supported: No 00:24:56.619 00:24:56.619 Admin Command Set Attributes 00:24:56.619 ============================ 00:24:56.619 Security Send/Receive: Not Supported 00:24:56.619 Format NVM: Not Supported 00:24:56.619 Firmware Activate/Download: Not Supported 00:24:56.619 Namespace Management: Not Supported 00:24:56.619 Device Self-Test: Not Supported 00:24:56.619 Directives: Not Supported 00:24:56.619 NVMe-MI: Not Supported 00:24:56.619 Virtualization Management: Not Supported 00:24:56.619 Doorbell Buffer Config: Not Supported 00:24:56.619 Get LBA Status Capability: Not Supported 00:24:56.619 Command & Feature Lockdown Capability: Not Supported 00:24:56.619 Abort Command Limit: 1 00:24:56.619 Async Event Request Limit: 1 00:24:56.619 Number of Firmware Slots: N/A 
00:24:56.619 Firmware Slot 1 Read-Only: N/A 00:24:56.619 Firmware Activation Without Reset: N/A 00:24:56.619 Multiple Update Detection Support: N/A 00:24:56.619 Firmware Update Granularity: No Information Provided 00:24:56.619 Per-Namespace SMART Log: No 00:24:56.619 Asymmetric Namespace Access Log Page: Not Supported 00:24:56.619 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:56.619 Command Effects Log Page: Not Supported 00:24:56.619 Get Log Page Extended Data: Supported 00:24:56.619 Telemetry Log Pages: Not Supported 00:24:56.619 Persistent Event Log Pages: Not Supported 00:24:56.619 Supported Log Pages Log Page: May Support 00:24:56.619 Commands Supported & Effects Log Page: Not Supported 00:24:56.619 Feature Identifiers & Effects Log Page:May Support 00:24:56.619 NVMe-MI Commands & Effects Log Page: May Support 00:24:56.619 Data Area 4 for Telemetry Log: Not Supported 00:24:56.619 Error Log Page Entries Supported: 1 00:24:56.619 Keep Alive: Not Supported 00:24:56.619 00:24:56.619 NVM Command Set Attributes 00:24:56.619 ========================== 00:24:56.619 Submission Queue Entry Size 00:24:56.619 Max: 1 00:24:56.619 Min: 1 00:24:56.619 Completion Queue Entry Size 00:24:56.619 Max: 1 00:24:56.619 Min: 1 00:24:56.619 Number of Namespaces: 0 00:24:56.619 Compare Command: Not Supported 00:24:56.619 Write Uncorrectable Command: Not Supported 00:24:56.619 Dataset Management Command: Not Supported 00:24:56.619 Write Zeroes Command: Not Supported 00:24:56.619 Set Features Save Field: Not Supported 00:24:56.619 Reservations: Not Supported 00:24:56.619 Timestamp: Not Supported 00:24:56.619 Copy: Not Supported 00:24:56.619 Volatile Write Cache: Not Present 00:24:56.619 Atomic Write Unit (Normal): 1 00:24:56.619 Atomic Write Unit (PFail): 1 00:24:56.619 Atomic Compare & Write Unit: 1 00:24:56.619 Fused Compare & Write: Not Supported 00:24:56.619 Scatter-Gather List 00:24:56.619 SGL Command Set: Supported 00:24:56.619 SGL Keyed: Not Supported 00:24:56.619 SGL Bit Bucket Descriptor: Not Supported 00:24:56.619 SGL Metadata Pointer: Not Supported 00:24:56.619 Oversized SGL: Not Supported 00:24:56.619 SGL Metadata Address: Not Supported 00:24:56.619 SGL Offset: Supported 00:24:56.619 Transport SGL Data Block: Not Supported 00:24:56.619 Replay Protected Memory Block: Not Supported 00:24:56.619 00:24:56.619 Firmware Slot Information 00:24:56.619 ========================= 00:24:56.619 Active slot: 0 00:24:56.619 00:24:56.619 00:24:56.619 Error Log 00:24:56.619 ========= 00:24:56.619 00:24:56.619 Active Namespaces 00:24:56.619 ================= 00:24:56.619 Discovery Log Page 00:24:56.619 ================== 00:24:56.619 Generation Counter: 2 00:24:56.619 Number of Records: 2 00:24:56.619 Record Format: 0 00:24:56.619 00:24:56.619 Discovery Log Entry 0 00:24:56.619 ---------------------- 00:24:56.619 Transport Type: 3 (TCP) 00:24:56.619 Address Family: 1 (IPv4) 00:24:56.619 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:56.619 Entry Flags: 00:24:56.619 Duplicate Returned Information: 0 00:24:56.619 Explicit Persistent Connection Support for Discovery: 0 00:24:56.619 Transport Requirements: 00:24:56.619 Secure Channel: Not Specified 00:24:56.619 Port ID: 1 (0x0001) 00:24:56.619 Controller ID: 65535 (0xffff) 00:24:56.619 Admin Max SQ Size: 32 00:24:56.619 Transport Service Identifier: 4420 00:24:56.619 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:56.619 Transport Address: 10.0.0.1 00:24:56.619 Discovery Log Entry 1 00:24:56.620 ---------------------- 
00:24:56.620 Transport Type: 3 (TCP) 00:24:56.620 Address Family: 1 (IPv4) 00:24:56.620 Subsystem Type: 2 (NVM Subsystem) 00:24:56.620 Entry Flags: 00:24:56.620 Duplicate Returned Information: 0 00:24:56.620 Explicit Persistent Connection Support for Discovery: 0 00:24:56.620 Transport Requirements: 00:24:56.620 Secure Channel: Not Specified 00:24:56.620 Port ID: 1 (0x0001) 00:24:56.620 Controller ID: 65535 (0xffff) 00:24:56.620 Admin Max SQ Size: 32 00:24:56.620 Transport Service Identifier: 4420 00:24:56.620 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:56.620 Transport Address: 10.0.0.1 00:24:56.620 09:18:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:56.878 get_feature(0x01) failed 00:24:56.878 get_feature(0x02) failed 00:24:56.878 get_feature(0x04) failed 00:24:56.878 ===================================================== 00:24:56.878 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:56.878 ===================================================== 00:24:56.878 Controller Capabilities/Features 00:24:56.878 ================================ 00:24:56.878 Vendor ID: 0000 00:24:56.878 Subsystem Vendor ID: 0000 00:24:56.878 Serial Number: 46ef1b9dc96129510336 00:24:56.878 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:56.878 Firmware Version: 6.5.12-2 00:24:56.878 Recommended Arb Burst: 6 00:24:56.878 IEEE OUI Identifier: 00 00 00 00:24:56.878 Multi-path I/O 00:24:56.878 May have multiple subsystem ports: Yes 00:24:56.878 May have multiple controllers: Yes 00:24:56.878 Associated with SR-IOV VF: No 00:24:56.878 Max Data Transfer Size: Unlimited 00:24:56.878 Max Number of Namespaces: 1024 00:24:56.878 Max Number of I/O Queues: 128 00:24:56.878 NVMe Specification Version (VS): 1.3 00:24:56.878 NVMe Specification Version (Identify): 1.3 00:24:56.878 Maximum Queue Entries: 1024 00:24:56.878 Contiguous Queues Required: No 00:24:56.878 Arbitration Mechanisms Supported 00:24:56.878 Weighted Round Robin: Not Supported 00:24:56.878 Vendor Specific: Not Supported 00:24:56.878 Reset Timeout: 7500 ms 00:24:56.878 Doorbell Stride: 4 bytes 00:24:56.878 NVM Subsystem Reset: Not Supported 00:24:56.878 Command Sets Supported 00:24:56.878 NVM Command Set: Supported 00:24:56.878 Boot Partition: Not Supported 00:24:56.878 Memory Page Size Minimum: 4096 bytes 00:24:56.878 Memory Page Size Maximum: 4096 bytes 00:24:56.878 Persistent Memory Region: Not Supported 00:24:56.878 Optional Asynchronous Events Supported 00:24:56.878 Namespace Attribute Notices: Supported 00:24:56.878 Firmware Activation Notices: Not Supported 00:24:56.878 ANA Change Notices: Supported 00:24:56.878 PLE Aggregate Log Change Notices: Not Supported 00:24:56.878 LBA Status Info Alert Notices: Not Supported 00:24:56.878 EGE Aggregate Log Change Notices: Not Supported 00:24:56.878 Normal NVM Subsystem Shutdown event: Not Supported 00:24:56.878 Zone Descriptor Change Notices: Not Supported 00:24:56.878 Discovery Log Change Notices: Not Supported 00:24:56.878 Controller Attributes 00:24:56.878 128-bit Host Identifier: Supported 00:24:56.878 Non-Operational Permissive Mode: Not Supported 00:24:56.878 NVM Sets: Not Supported 00:24:56.878 Read Recovery Levels: Not Supported 00:24:56.878 Endurance Groups: Not Supported 00:24:56.878 Predictable Latency Mode: Not Supported 00:24:56.878 Traffic Based Keep ALive: 
Supported 00:24:56.878 Namespace Granularity: Not Supported 00:24:56.878 SQ Associations: Not Supported 00:24:56.878 UUID List: Not Supported 00:24:56.878 Multi-Domain Subsystem: Not Supported 00:24:56.878 Fixed Capacity Management: Not Supported 00:24:56.878 Variable Capacity Management: Not Supported 00:24:56.878 Delete Endurance Group: Not Supported 00:24:56.878 Delete NVM Set: Not Supported 00:24:56.878 Extended LBA Formats Supported: Not Supported 00:24:56.878 Flexible Data Placement Supported: Not Supported 00:24:56.878 00:24:56.878 Controller Memory Buffer Support 00:24:56.878 ================================ 00:24:56.878 Supported: No 00:24:56.878 00:24:56.878 Persistent Memory Region Support 00:24:56.878 ================================ 00:24:56.878 Supported: No 00:24:56.878 00:24:56.878 Admin Command Set Attributes 00:24:56.878 ============================ 00:24:56.878 Security Send/Receive: Not Supported 00:24:56.878 Format NVM: Not Supported 00:24:56.878 Firmware Activate/Download: Not Supported 00:24:56.878 Namespace Management: Not Supported 00:24:56.878 Device Self-Test: Not Supported 00:24:56.878 Directives: Not Supported 00:24:56.878 NVMe-MI: Not Supported 00:24:56.878 Virtualization Management: Not Supported 00:24:56.878 Doorbell Buffer Config: Not Supported 00:24:56.878 Get LBA Status Capability: Not Supported 00:24:56.878 Command & Feature Lockdown Capability: Not Supported 00:24:56.878 Abort Command Limit: 4 00:24:56.878 Async Event Request Limit: 4 00:24:56.878 Number of Firmware Slots: N/A 00:24:56.878 Firmware Slot 1 Read-Only: N/A 00:24:56.878 Firmware Activation Without Reset: N/A 00:24:56.878 Multiple Update Detection Support: N/A 00:24:56.878 Firmware Update Granularity: No Information Provided 00:24:56.878 Per-Namespace SMART Log: Yes 00:24:56.878 Asymmetric Namespace Access Log Page: Supported 00:24:56.878 ANA Transition Time : 10 sec 00:24:56.878 00:24:56.878 Asymmetric Namespace Access Capabilities 00:24:56.878 ANA Optimized State : Supported 00:24:56.878 ANA Non-Optimized State : Supported 00:24:56.878 ANA Inaccessible State : Supported 00:24:56.878 ANA Persistent Loss State : Supported 00:24:56.878 ANA Change State : Supported 00:24:56.878 ANAGRPID is not changed : No 00:24:56.878 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:56.878 00:24:56.878 ANA Group Identifier Maximum : 128 00:24:56.878 Number of ANA Group Identifiers : 128 00:24:56.878 Max Number of Allowed Namespaces : 1024 00:24:56.878 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:56.878 Command Effects Log Page: Supported 00:24:56.878 Get Log Page Extended Data: Supported 00:24:56.878 Telemetry Log Pages: Not Supported 00:24:56.878 Persistent Event Log Pages: Not Supported 00:24:56.878 Supported Log Pages Log Page: May Support 00:24:56.878 Commands Supported & Effects Log Page: Not Supported 00:24:56.878 Feature Identifiers & Effects Log Page:May Support 00:24:56.878 NVMe-MI Commands & Effects Log Page: May Support 00:24:56.878 Data Area 4 for Telemetry Log: Not Supported 00:24:56.878 Error Log Page Entries Supported: 128 00:24:56.878 Keep Alive: Supported 00:24:56.878 Keep Alive Granularity: 1000 ms 00:24:56.878 00:24:56.878 NVM Command Set Attributes 00:24:56.878 ========================== 00:24:56.878 Submission Queue Entry Size 00:24:56.878 Max: 64 00:24:56.878 Min: 64 00:24:56.878 Completion Queue Entry Size 00:24:56.878 Max: 16 00:24:56.878 Min: 16 00:24:56.878 Number of Namespaces: 1024 00:24:56.878 Compare Command: Not Supported 00:24:56.878 Write Uncorrectable Command: Not 
Supported 00:24:56.878 Dataset Management Command: Supported 00:24:56.878 Write Zeroes Command: Supported 00:24:56.878 Set Features Save Field: Not Supported 00:24:56.878 Reservations: Not Supported 00:24:56.878 Timestamp: Not Supported 00:24:56.878 Copy: Not Supported 00:24:56.878 Volatile Write Cache: Present 00:24:56.878 Atomic Write Unit (Normal): 1 00:24:56.878 Atomic Write Unit (PFail): 1 00:24:56.878 Atomic Compare & Write Unit: 1 00:24:56.878 Fused Compare & Write: Not Supported 00:24:56.878 Scatter-Gather List 00:24:56.878 SGL Command Set: Supported 00:24:56.878 SGL Keyed: Not Supported 00:24:56.878 SGL Bit Bucket Descriptor: Not Supported 00:24:56.878 SGL Metadata Pointer: Not Supported 00:24:56.878 Oversized SGL: Not Supported 00:24:56.878 SGL Metadata Address: Not Supported 00:24:56.878 SGL Offset: Supported 00:24:56.878 Transport SGL Data Block: Not Supported 00:24:56.878 Replay Protected Memory Block: Not Supported 00:24:56.878 00:24:56.878 Firmware Slot Information 00:24:56.878 ========================= 00:24:56.878 Active slot: 0 00:24:56.878 00:24:56.878 Asymmetric Namespace Access 00:24:56.878 =========================== 00:24:56.878 Change Count : 0 00:24:56.878 Number of ANA Group Descriptors : 1 00:24:56.878 ANA Group Descriptor : 0 00:24:56.878 ANA Group ID : 1 00:24:56.878 Number of NSID Values : 1 00:24:56.878 Change Count : 0 00:24:56.878 ANA State : 1 00:24:56.878 Namespace Identifier : 1 00:24:56.878 00:24:56.878 Commands Supported and Effects 00:24:56.878 ============================== 00:24:56.878 Admin Commands 00:24:56.878 -------------- 00:24:56.878 Get Log Page (02h): Supported 00:24:56.878 Identify (06h): Supported 00:24:56.878 Abort (08h): Supported 00:24:56.878 Set Features (09h): Supported 00:24:56.878 Get Features (0Ah): Supported 00:24:56.878 Asynchronous Event Request (0Ch): Supported 00:24:56.878 Keep Alive (18h): Supported 00:24:56.878 I/O Commands 00:24:56.878 ------------ 00:24:56.878 Flush (00h): Supported 00:24:56.879 Write (01h): Supported LBA-Change 00:24:56.879 Read (02h): Supported 00:24:56.879 Write Zeroes (08h): Supported LBA-Change 00:24:56.879 Dataset Management (09h): Supported 00:24:56.879 00:24:56.879 Error Log 00:24:56.879 ========= 00:24:56.879 Entry: 0 00:24:56.879 Error Count: 0x3 00:24:56.879 Submission Queue Id: 0x0 00:24:56.879 Command Id: 0x5 00:24:56.879 Phase Bit: 0 00:24:56.879 Status Code: 0x2 00:24:56.879 Status Code Type: 0x0 00:24:56.879 Do Not Retry: 1 00:24:56.879 Error Location: 0x28 00:24:56.879 LBA: 0x0 00:24:56.879 Namespace: 0x0 00:24:56.879 Vendor Log Page: 0x0 00:24:56.879 ----------- 00:24:56.879 Entry: 1 00:24:56.879 Error Count: 0x2 00:24:56.879 Submission Queue Id: 0x0 00:24:56.879 Command Id: 0x5 00:24:56.879 Phase Bit: 0 00:24:56.879 Status Code: 0x2 00:24:56.879 Status Code Type: 0x0 00:24:56.879 Do Not Retry: 1 00:24:56.879 Error Location: 0x28 00:24:56.879 LBA: 0x0 00:24:56.879 Namespace: 0x0 00:24:56.879 Vendor Log Page: 0x0 00:24:56.879 ----------- 00:24:56.879 Entry: 2 00:24:56.879 Error Count: 0x1 00:24:56.879 Submission Queue Id: 0x0 00:24:56.879 Command Id: 0x4 00:24:56.879 Phase Bit: 0 00:24:56.879 Status Code: 0x2 00:24:56.879 Status Code Type: 0x0 00:24:56.879 Do Not Retry: 1 00:24:56.879 Error Location: 0x28 00:24:56.879 LBA: 0x0 00:24:56.879 Namespace: 0x0 00:24:56.879 Vendor Log Page: 0x0 00:24:56.879 00:24:56.879 Number of Queues 00:24:56.879 ================ 00:24:56.879 Number of I/O Submission Queues: 128 00:24:56.879 Number of I/O Completion Queues: 128 00:24:56.879 00:24:56.879 ZNS 
Specific Controller Data 00:24:56.879 ============================ 00:24:56.879 Zone Append Size Limit: 0 00:24:56.879 00:24:56.879 00:24:56.879 Active Namespaces 00:24:56.879 ================= 00:24:56.879 get_feature(0x05) failed 00:24:56.879 Namespace ID:1 00:24:56.879 Command Set Identifier: NVM (00h) 00:24:56.879 Deallocate: Supported 00:24:56.879 Deallocated/Unwritten Error: Not Supported 00:24:56.879 Deallocated Read Value: Unknown 00:24:56.879 Deallocate in Write Zeroes: Not Supported 00:24:56.879 Deallocated Guard Field: 0xFFFF 00:24:56.879 Flush: Supported 00:24:56.879 Reservation: Not Supported 00:24:56.879 Namespace Sharing Capabilities: Multiple Controllers 00:24:56.879 Size (in LBAs): 1310720 (5GiB) 00:24:56.879 Capacity (in LBAs): 1310720 (5GiB) 00:24:56.879 Utilization (in LBAs): 1310720 (5GiB) 00:24:56.879 UUID: b2724e72-35f7-47aa-a38d-228d19fb9241 00:24:56.879 Thin Provisioning: Not Supported 00:24:56.879 Per-NS Atomic Units: Yes 00:24:56.879 Atomic Boundary Size (Normal): 0 00:24:56.879 Atomic Boundary Size (PFail): 0 00:24:56.879 Atomic Boundary Offset: 0 00:24:56.879 NGUID/EUI64 Never Reused: No 00:24:56.879 ANA group ID: 1 00:24:56.879 Namespace Write Protected: No 00:24:56.879 Number of LBA Formats: 1 00:24:56.879 Current LBA Format: LBA Format #00 00:24:56.879 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:24:56.879 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.879 rmmod nvme_tcp 00:24:56.879 rmmod nvme_fabrics 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:56.879 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:57.137 09:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:57.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:57.961 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:57.961 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:57.961 ************************************ 00:24:57.961 END TEST nvmf_identify_kernel_target 00:24:57.961 ************************************ 00:24:57.961 00:24:57.961 real 0m3.297s 00:24:57.961 user 0m1.071s 00:24:57.961 sys 0m1.666s 00:24:57.961 09:18:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:57.961 09:18:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.219 09:18:10 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:58.219 09:18:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:58.219 09:18:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:58.219 09:18:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.219 ************************************ 00:24:58.219 START TEST nvmf_auth_host 00:24:58.219 ************************************ 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:58.219 * Looking for test storage... 
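clean_kernel_target, traced above, undoes the configfs configuration in reverse order before the next test starts: disable the namespace, unlink the subsystem from the port, remove the configfs directories, and unload the target modules. The same steps as a stand-alone sketch, with the paths taken from this run:

    # tear down the kernel target created earlier, in the reverse order of setup (sketch)
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    echo 0 > "$subsys/namespaces/1/enable"                   # stop serving the namespace first
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"     # unpublish the subsystem from the port
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet                              # modprobe -r accepts several modules at once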
00:24:58.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.219 09:18:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:58.220 Cannot find device "nvmf_tgt_br" 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.220 Cannot find device "nvmf_tgt_br2" 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:58.220 Cannot find device "nvmf_tgt_br" 
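Note on the errors just above and below: nvmf_veth_init first tears down any leftover test topology, and on a fresh VM none exists, so the "Cannot find device" and "Cannot open network namespace" messages are expected and harmless. The entries that follow rebuild the topology. A condensed sketch of what gets built, using only commands visible in this trace (the second veth pair, the link-up steps and the FORWARD rule are elided for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # root-namespace side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-namespace side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # address in the root namespace
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT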
00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:58.220 Cannot find device "nvmf_tgt_br2" 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:24:58.220 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:58.478 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:58.478 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:58.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.478 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:24:58.478 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:58.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.478 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:58.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:24:58.479 00:24:58.479 --- 10.0.0.2 ping statistics --- 00:24:58.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.479 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:24:58.479 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:58.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:58.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:24:58.737 00:24:58.737 --- 10.0.0.3 ping statistics --- 00:24:58.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.737 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:58.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:24:58.737 00:24:58.737 --- 10.0.0.1 ping statistics --- 00:24:58.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.737 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.737 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77649 00:24:58.738 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:58.738 09:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77649 00:24:58.738 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 77649 ']' 00:24:58.738 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.738 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:58.738 09:18:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.738 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:58.738 09:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.995 09:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:58.995 09:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:24:58.995 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.995 09:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:58.995 09:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.995 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.995 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:58.996 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:58.996 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:58.996 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.996 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:58.996 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:58.996 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:58.996 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1228141263959bd1b92cfb8b0143c814 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.79a 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1228141263959bd1b92cfb8b0143c814 0 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1228141263959bd1b92cfb8b0143c814 0 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1228141263959bd1b92cfb8b0143c814 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.79a 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.79a 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.79a 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1b4d86361879438352ff9354e824c4b26935ec43cab123003b72e7cf329c9404 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.n5c 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1b4d86361879438352ff9354e824c4b26935ec43cab123003b72e7cf329c9404 3 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1b4d86361879438352ff9354e824c4b26935ec43cab123003b72e7cf329c9404 3 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1b4d86361879438352ff9354e824c4b26935ec43cab123003b72e7cf329c9404 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.n5c 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.n5c 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.n5c 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4994f53db59f9e4894d43de0a257f00c107ec52c19d21dfc 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XzM 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4994f53db59f9e4894d43de0a257f00c107ec52c19d21dfc 0 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4994f53db59f9e4894d43de0a257f00c107ec52c19d21dfc 0 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4994f53db59f9e4894d43de0a257f00c107ec52c19d21dfc 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XzM 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XzM 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XzM 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3ce026d22f9d055f730c7b8005c98644ff5ecb50dd3b3cf9 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Xqm 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3ce026d22f9d055f730c7b8005c98644ff5ecb50dd3b3cf9 2 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3ce026d22f9d055f730c7b8005c98644ff5ecb50dd3b3cf9 2 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3ce026d22f9d055f730c7b8005c98644ff5ecb50dd3b3cf9 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:59.255 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Xqm 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Xqm 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Xqm 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=20c1c74b27d1442981ac16d5ea87ae77 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ur9 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 20c1c74b27d1442981ac16d5ea87ae77 
1 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 20c1c74b27d1442981ac16d5ea87ae77 1 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=20c1c74b27d1442981ac16d5ea87ae77 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:59.513 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ur9 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ur9 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Ur9 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a908ec9dc15da00bd14665926a8b2d37 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.buI 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a908ec9dc15da00bd14665926a8b2d37 1 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a908ec9dc15da00bd14665926a8b2d37 1 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a908ec9dc15da00bd14665926a8b2d37 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.buI 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.buI 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.buI 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:59.514 09:18:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b07065f560a8387fecfe1ecb27e153bae43436d20e86294c 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5LH 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b07065f560a8387fecfe1ecb27e153bae43436d20e86294c 2 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b07065f560a8387fecfe1ecb27e153bae43436d20e86294c 2 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b07065f560a8387fecfe1ecb27e153bae43436d20e86294c 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5LH 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5LH 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5LH 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e703f1d83a12b780d4854af685f11c2a 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Yw1 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e703f1d83a12b780d4854af685f11c2a 0 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e703f1d83a12b780d4854af685f11c2a 0 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e703f1d83a12b780d4854af685f11c2a 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:59.514 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Yw1 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Yw1 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Yw1 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:59.771 09:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:59.771 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8d3fee19965f9dc56d6efcc1c70158245c2cb42d183ca061a039e42cce3522bb 00:24:59.771 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Rgb 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d3fee19965f9dc56d6efcc1c70158245c2cb42d183ca061a039e42cce3522bb 3 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d3fee19965f9dc56d6efcc1c70158245c2cb42d183ca061a039e42cce3522bb 3 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d3fee19965f9dc56d6efcc1c70158245c2cb42d183ca061a039e42cce3522bb 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Rgb 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Rgb 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Rgb 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77649 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 77649 ']' 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
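The gen_dhchap_key calls above each draw len/2 random bytes with xxd and hand the resulting hex string to format_key, whose python here-doc body is not shown in the trace. Below is a minimal sketch of that formatting step, reconstructed from the inputs and outputs visible in this log (for example, key 20c1c74b27d1442981ac16d5ea87ae77 with digest 1 becomes DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin:). The base64 payload demonstrably contains the ASCII hex string plus four extra bytes; treating those four bytes as an appended little-endian CRC-32 is an assumption, not something the trace confirms.

# hypothetical reconstruction of the hidden `python -` here-doc, not the verbatim common.sh code
key=20c1c74b27d1442981ac16d5ea87ae77 digest=1 python3 - <<'EOF'
import base64, os, zlib
key = os.environ["key"].encode()             # the hex string itself is the secret material
digest = int(os.environ["digest"])           # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: appended CRC-32, least-significant byte first
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF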
00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:59.772 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.029 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.79a 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.n5c ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.n5c 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XzM 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Xqm ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xqm 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Ur9 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.buI ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.buI 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
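While the remaining secrets are registered below, the pattern is already clear: each /tmp/spdk.key-* file is loaded into the running nvmf_tgt's keyring under the name keyN (the host DH-HMAC-CHAP secret for index N) or ckeyN (the matching controller secret used for bidirectional authentication), and those names are exactly what the later bdev_nvme_attach_controller calls reference. Equivalent stand-alone commands, using the same RPCs that rpc_cmd wraps here (rpc.py talks to the /var/tmp/spdk.sock socket announced above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.XzM      # host secret, index 1
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xqm    # controller secret, index 1
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1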
00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5LH 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Yw1 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Yw1 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Rgb 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
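configure_kernel_target, whose trace follows, publishes one local NVMe namespace through the kernel nvmet TCP target so that the SPDK host side has something to authenticate against, and nvmet_auth_set_key then arms DH-HMAC-CHAP for the allowed host. The xtrace output hides the redirect targets of the echo commands; the sketch below fills them in from the standard nvmet configfs layout, so the attribute names (device_path, addr_traddr, dhchap_key, and so on) are assumptions, while the values are the ones visible in this log (long secrets truncated):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
modprobe nvmet
mkdir -p "$sub/namespaces/1" "$port" "$host"
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # block device picked by the selection loop below
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"
echo 0 > "$sub/attr_allow_any_host"                   # only explicitly allowed hosts may connect
ln -s "$host" "$sub/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"             # from: nvmet_auth_set_key sha256 ffdhe2048 1
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:NDk5NGY1...FUia4A==:' > "$host/dhchap_key"       # keys[1], truncated
echo 'DHHC-1:02:M2NlMDI2...UqGitw==:' > "$host/dhchap_ctrl_key"  # ckeys[1], truncated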
00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:00.030 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:00.287 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:00.287 09:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:00.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.544 Waiting for block devices as requested 00:25:00.544 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:00.804 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:01.370 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:01.629 No valid GPT data, bailing 00:25:01.629 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:01.629 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:01.629 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:01.629 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:01.629 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:01.629 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:01.630 No valid GPT data, bailing 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:01.630 09:18:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:01.630 No valid GPT data, bailing 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:01.630 09:18:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:01.889 No valid GPT data, bailing 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:01.889 09:18:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid=c738663f-2662-4398-b539-15f14394251b -a 10.0.0.1 -t tcp -s 4420 00:25:01.889 00:25:01.889 Discovery Log Number of Records 2, Generation counter 2 00:25:01.889 =====Discovery Log Entry 0====== 00:25:01.889 trtype: tcp 00:25:01.889 adrfam: ipv4 00:25:01.889 subtype: current discovery subsystem 00:25:01.889 treq: not specified, sq flow control disable supported 00:25:01.889 portid: 1 00:25:01.889 trsvcid: 4420 00:25:01.889 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:01.889 traddr: 10.0.0.1 00:25:01.889 eflags: none 00:25:01.889 sectype: none 00:25:01.889 =====Discovery Log Entry 1====== 00:25:01.889 trtype: tcp 00:25:01.889 adrfam: ipv4 00:25:01.889 subtype: nvme subsystem 00:25:01.889 treq: not specified, sq flow control disable supported 00:25:01.889 portid: 1 00:25:01.889 trsvcid: 4420 00:25:01.889 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:01.889 traddr: 10.0.0.1 00:25:01.889 eflags: none 00:25:01.889 sectype: none 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.889 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.890 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.890 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.890 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.890 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.149 nvme0n1 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.149 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.150 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:02.150 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.150 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.150 nvme0n1 00:25:02.150 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.150 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.150 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.150 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.150 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.432 nvme0n1 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.432 09:18:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.432 09:18:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.433 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.433 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.433 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.433 09:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.433 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.433 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.433 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.691 nvme0n1 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:02.691 09:18:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.691 09:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.691 nvme0n1 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.691 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.950 nvme0n1 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.950 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.209 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.468 nvme0n1 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.468 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.727 nvme0n1 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.727 09:18:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.727 09:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:03.727 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.728 nvme0n1 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.728 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:03.986 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 nvme0n1 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
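The address used in every bdev_nvme_attach_controller call above comes from get_main_ns_ip, whose expansion is visible in the nvmf/common.sh@741-@755 entries. A minimal sketch of that helper, reconstructed only from the expanded values in the trace; the TEST_TRANSPORT variable name and the early-return handling are assumptions, while the candidate map and the indirect expansion that yields 10.0.0.1 are taken from the log.

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()

	# Map each transport to the name of the variable holding its address.
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	# TEST_TRANSPORT is assumed; the trace shows it expanding to "tcp".
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

	# Dereference the selected variable name (here NVMF_INITIATOR_IP -> 10.0.0.1).
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1
	echo "${!ip}"
}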
00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.987 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.246 nvme0n1 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.246 09:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
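The block that repeats above for every digest/dhgroup/keyid combination is one pass of the host/auth.sh inner loop (the @100-@104 and @55-@65 prefixes). Reassembled from those entries it reduces to roughly the sketch below; the positional-parameter handling inside connect_authenticate is an assumption, while the rpc_cmd calls, the hostnqn/subnqn values, and the ckey conditional are copied from the trace. The digests/dhgroups/keys/ckeys arrays and the rpc_cmd, nvmet_auth_set_key, and get_main_ns_ip helpers are assumed to be defined earlier in the script.

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Only pass a controller key when a ckey exists for this keyid (keyid 4 has none).
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Restrict host-side negotiation to the combination under test, then connect.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
			connect_authenticate "$digest" "$dhgroup" "$keyid" # host side
		done
	done
done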
00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:04.813 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.814 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.072 nvme0n1 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.072 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.073 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.343 nvme0n1 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.343 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 nvme0n1 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.602 09:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.602 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.860 nvme0n1 00:25:05.860 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.860 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.861 09:18:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.861 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 nvme0n1 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.119 09:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.021 nvme0n1 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.021 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.280 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.540 nvme0n1 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.540 
09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.540 09:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.799 nvme0n1 00:25:08.799 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.799 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.799 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.799 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.799 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.799 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.058 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.317 nvme0n1 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.317 09:18:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.317 09:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.576 nvme0n1 00:25:09.576 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.835 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.835 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.835 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.835 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.835 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.835 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.836 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.403 nvme0n1 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.403 09:18:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.403 09:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.968 nvme0n1 00:25:10.968 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.969 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.554 nvme0n1 00:25:11.554 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.554 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.554 09:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.554 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.554 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.554 09:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.813 
09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.813 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.380 nvme0n1 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.380 
09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:12.380 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.381 09:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.947 nvme0n1 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.948 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.206 nvme0n1 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.206 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
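[annotation] Each block then runs the same host-side sequence, traced as `connect_authenticate <digest> <dhgroup> <keyid>`. As a reading aid, here is a minimal sketch of that per-iteration flow using SPDK's `scripts/rpc.py` (which the `rpc_cmd` helper seen in the trace presumably forwards to), reconstructed only from commands and flags visible in this log. The `keyN`/`ckeyN` names are assumed to be keyring entries registered earlier in the run, which is not shown in this excerpt.

```bash
#!/usr/bin/env bash
# Sketch only: the per-iteration host-side steps as they appear in the trace.
set -e

digest=sha384
dhgroup=ffdhe2048
keyid=0

# Restrict the host to the digest/dhgroup pair under test
# (mirrors: rpc_cmd bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...)
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests "$digest" \
    --dhchap-dhgroups "$dhgroup"

# Connect with the DH-HMAC-CHAP credentials for this key index
# (mirrors: rpc_cmd bdev_nvme_attach_controller ... --dhchap-key keyN --dhchap-ctrlr-key ckeyN)
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the controller came up, then tear it down for the next iteration
# (mirrors: rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name' and the detach)
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0
```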
00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.207 nvme0n1 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.207 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.466 nvme0n1 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:13.466 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.467 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.727 nvme0n1 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.727 09:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.727 nvme0n1 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:25:13.727 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.988 nvme0n1 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
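[annotation] The repeated `get_main_ns_ip` trace (the `ip_candidates` associative array, the `[[ -z ... ]]` checks, and the final `echo 10.0.0.1`) is the helper that resolves which address to dial for the transport under test. A plausible reconstruction that matches the checks visible in the trace is below; the real helper lives in `nvmf/common.sh` and the `TEST_TRANSPORT` variable name is my assumption for whatever holds the string `tcp` here.

```bash
# Sketch of the address-resolution helper traced as get_main_ns_ip.
# It maps the transport to the *name* of an environment variable and then
# dereferences it, which is why the trace first tests the variable name
# (NVMF_INITIATOR_IP) and then its value (10.0.0.1) before echoing it.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}     # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1              # indirect expansion -> 10.0.0.1
    echo "${!ip}"
}

# Example with the values seen in this run:
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip   # prints 10.0.0.1
```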
00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.988 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.989 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.989 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.989 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.989 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.989 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.247 nvme0n1 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.247 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.248 nvme0n1 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.248 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.506 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.506 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.506 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.507 nvme0n1 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.507 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.765 09:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.765 nvme0n1 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.765 09:18:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.765 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.766 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.024 nvme0n1 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.024 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.283 nvme0n1 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.283 09:18:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:15.283 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.284 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.543 nvme0n1 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:15.543 09:18:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.543 09:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.802 nvme0n1 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:25:15.802 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.062 nvme0n1 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.062 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.320 nvme0n1 00:25:16.320 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.320 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.320 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.320 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.320 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.320 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.579 09:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.839 nvme0n1 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.840 09:18:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.840 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.416 nvme0n1 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.416 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.417 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.417 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.417 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.417 09:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.417 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.417 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.417 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.675 nvme0n1 00:25:17.675 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.675 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.675 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.675 09:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.675 09:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.675 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
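The get_main_ns_ip helper traced just above decides which address the host dials for the transport under test: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, the matching variable name is dereferenced, and the result (10.0.0.1 in this run) is echoed. The sketch below is only an approximate reconstruction from this trace; the real function lives in nvmf/common.sh and may differ in detail, and TEST_TRANSPORT is an assumed name for the transport argument.

    # Approximate reconstruction of get_main_ns_ip from the xtrace above
    # (nvmf/common.sh@741-755); details outside the trace are assumptions.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT is tcp in this run, so the initiator IP variable is picked.
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the *name* of the variable
        [[ -z ${!ip} ]] && return 1            # traces as: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                          # traces as: echo 10.0.0.1
    }

    # Example: NVMF_INITIATOR_IP=10.0.0.1 TEST_TRANSPORT=tcp get_main_ns_ip   # prints 10.0.0.1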
00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:17.676 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.242 nvme0n1 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
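Each pass of the loops above exercises one digest/dhgroup/keyid combination end to end; the sha384/ffdhe8192 keyid 0 pass that follows is representative. Condensed from the commands visible in this trace, the host side of one iteration looks roughly like the sketch below. rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, key0/ckey0 name DH-HMAC-CHAP secrets registered earlier in the run (that setup is not shown in this excerpt), and the target-side secrets were installed beforehand by nvmet_auth_set_key. Passes with no controller secret defined (keyid 4 above) simply omit --dhchap-ctrlr-key.

    # One host-side authentication pass, condensed from the xtrace above.
    # Restrict the allowed digest and DH group for this pass:
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Connect using the host key and, for bidirectional auth, the controller key:
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Confirm the controller authenticated and came up, then detach so the next
    # digest/dhgroup/keyid combination starts from a clean state.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0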
00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.242 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.243 09:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.809 nvme0n1 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.809 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.810 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.376 nvme0n1 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.376 09:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.013 nvme0n1 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.013 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.580 nvme0n1 00:25:20.580 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.580 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:20.580 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.580 09:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.580 09:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.580 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.857 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.858 09:18:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.858 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.430 nvme0n1 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.430 nvme0n1 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.430 09:18:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.430 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.689 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.690 nvme0n1 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.690 09:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.690 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.949 nvme0n1 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.949 09:18:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.949 09:18:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.949 nvme0n1 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.949 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 nvme0n1 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.209 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.210 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.469 nvme0n1 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.469 
09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.469 09:18:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.469 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.728 nvme0n1 00:25:22.728 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.728 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.728 09:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.728 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.728 09:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
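[editor's note] The xtrace output above repeats one fixed pattern per digest/dhgroup/keyid combination: set the key on the target, reconfigure the host's DH-HMAC-CHAP options, attach the controller with that key, verify it came up, detach. As a reading aid, here is a condensed, hedged sketch of that cycle, not the suite's verbatim helpers; rpc_cmd is assumed to be the test framework's RPC wrapper, and key2/ckey2/ckeys[] refer to key material set up earlier in host/auth.sh.

# One pass of the connect/verify/detach cycle visible in the trace,
# shown for sha512 / ffdhe3072 / keyid=2 (values taken from the log above).
digest=sha512 dhgroup=ffdhe3072 keyid=2
# Restrict the host to the digest/dhgroup combination under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Attach with the matching unidirectional key and, when present, the controller key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
# Confirm the authenticated controller exists before tearing it down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0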
00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.728 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.729 nvme0n1 00:25:22.729 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.987 09:18:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
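[editor's note] The nvmf/common.sh trace around this point (local ip, the ip_candidates map, the -z guards, and the final echo 10.0.0.1) comes from the helper that decides which address the host should dial. Below is a hedged reconstruction built only from those trace lines; TEST_TRANSPORT and the exact guard conditions are inferred from the expanded values seen in the log (tcp, NVMF_INITIATOR_IP, 10.0.0.1), and the real helper may differ in detail.

# Reconstructed sketch: map the active transport to the name of the
# environment variable carrying the address, then dereference it.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out if the transport is unset or has no candidate variable.
    if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
        return 1
    fi
    ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # the variable must actually hold an address
    echo "${!ip}"                          # resolves to 10.0.0.1 in this run
}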
00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.987 nvme0n1 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:22.987 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.246 
09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.246 nvme0n1 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.246 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.247 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.505 nvme0n1 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.505 09:18:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.505 09:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.763 nvme0n1 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
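[annotation] The trace above keeps repeating the same host-side round for each key: restrict the host to one digest/DH group, attach with the matching DHHC-1 secrets, confirm the controller came up, then detach. Below is a minimal bash sketch of that round, reconstructed only from the rpc_cmd calls visible in this log; the connect_and_check helper name and the rpc_cmd wrapper are assumptions for illustration, not the actual host/auth.sh code.

    #!/usr/bin/env bash
    # Sketch of one authentication round as seen in the trace (assumed names).
    rpc_cmd() { ./scripts/rpc.py "$@"; }   # assumed stand-in for the harness's rpc_cmd wrapper

    connect_and_check() {   # hypothetical helper mirroring the traced connect_authenticate flow
      local digest=$1 dhgroup=$2 keyid=$3
      # Limit the host to the digest/DH group the target side was just configured with.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Attach over TCP and authenticate with keyring entries key<N>/ckey<N>.
      # (In this run key 4 has no controller key; the real script drops --dhchap-ctrlr-key then.)
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
      # The controller only appears if DH-HMAC-CHAP succeeded; verify, then tear down.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    }

    connect_and_check sha512 ffdhe4096 2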
00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.763 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.021 nvme0n1 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.021 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.022 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.280 nvme0n1 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.280 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.567 nvme0n1 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.567 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
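[annotation] At this point the trace moves on to the next DH group (ffdhe6144) and restarts the key loop at host/auth.sh@101-104. A rough sketch of the sweep being executed, assuming the keys array and the nvmet_auth_set_key/connect_authenticate helpers from host/auth.sh are already sourced; the dhgroups list shown is an assumption, the log only demonstrates sha512 with ffdhe3072 through ffdhe8192 and key IDs 0-4.

    # Sweep every DH group against every configured key index:
    # target side is programmed first, then the host attaches and verifies.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed subset seen in this pass
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # configure the kernel nvmet target key
        connect_authenticate sha512 "$dhgroup" "$keyid"   # host: set options, attach, check, detach
      done
    done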
00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.568 09:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 nvme0n1 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
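[annotation] Each attach is preceded by the get_main_ns_ip lookup traced at nvmf/common.sh@741-755, which maps the transport to the name of an environment variable and dereferences it (here tcp -> NVMF_INITIATOR_IP -> 10.0.0.1). A small reconstruction of that selection logic; get_ip_for_transport is a hypothetical name used only for this sketch.

    # Pick the address for a transport by indirecting through a name table.
    get_ip_for_transport() {
      local transport=$1 ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -z $transport ]] && return 1
      ip=${ip_candidates[$transport]}   # e.g. "NVMF_INITIATOR_IP" for tcp
      ip=${!ip}                         # indirect expansion -> 10.0.0.1 in this run
      [[ -z $ip ]] && return 1
      echo "$ip"
    }

    NVMF_INITIATOR_IP=10.0.0.1
    get_ip_for_transport tcp   # prints 10.0.0.1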
00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.134 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.393 nvme0n1 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.393 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.652 09:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.911 nvme0n1 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.911 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.170 nvme0n1 00:25:26.170 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.170 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.170 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.170 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.170 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.170 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.428 09:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.686 nvme0n1 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.686 09:18:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIyODE0MTI2Mzk1OWJkMWI5MmNmYjhiMDE0M2M4MTTDEV7B: 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI0ZDg2MzYxODc5NDM4MzUyZmY5MzU0ZTgyNGM0YjI2OTM1ZWM0M2NhYjEyMzAwM2I3MmU3Y2YzMjljOTQwNDvLSGk=: 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.686 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.321 nvme0n1 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.321 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.322 09:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.255 nvme0n1 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.255 09:18:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjMWM3NGIyN2QxNDQyOTgxYWMxNmQ1ZWE4N2FlNzdNQNin: 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: ]] 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkwOGVjOWRjMTVkYTAwYmQxNDY2NTkyNmE4YjJkMzc+qJTh: 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.255 09:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.819 nvme0n1 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA3MDY1ZjU2MGE4Mzg3ZmVjZmUxZWNiMjdlMTUzYmFlNDM0MzZkMjBlODYyOTRj9MXC0w==: 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTcwM2YxZDgzYTEyYjc4MGQ0ODU0YWY2ODVmMTFjMmGxCrKd: 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:28.819 09:18:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.819 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.396 nvme0n1 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzZmVlMTk5NjVmOWRjNTZkNmVmY2MxYzcwMTU4MjQ1YzJjYjQyZDE4M2NhMDYxYTAzOWU0MmNjZTM1MjJiYq4+5k4=: 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.396 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:25:29.397 09:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.967 nvme0n1 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.967 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.225 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.225 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:30.225 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.225 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk5NGY1M2RiNTlmOWU0ODk0ZDQzZGUwYTI1N2YwMGMxMDdlYzUyYzE5ZDIxZGZjFUia4A==: 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2NlMDI2ZDIyZjlkMDU1ZjczMGM3YjgwMDVjOTg2NDRmZjVlY2I1MGRkM2IzY2Y5UqGitw==: 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.226 
09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.226 request: 00:25:30.226 { 00:25:30.226 "name": "nvme0", 00:25:30.226 "trtype": "tcp", 00:25:30.226 "traddr": "10.0.0.1", 00:25:30.226 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:30.226 "adrfam": "ipv4", 00:25:30.226 "trsvcid": "4420", 00:25:30.226 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:30.226 "method": "bdev_nvme_attach_controller", 00:25:30.226 "req_id": 1 00:25:30.226 } 00:25:30.226 Got JSON-RPC error response 00:25:30.226 response: 00:25:30.226 { 00:25:30.226 "code": -32602, 00:25:30.226 "message": "Invalid parameters" 00:25:30.226 } 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:30.226 
09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.226 request: 00:25:30.226 { 00:25:30.226 "name": "nvme0", 00:25:30.226 "trtype": "tcp", 00:25:30.226 "traddr": "10.0.0.1", 00:25:30.226 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:30.226 "adrfam": "ipv4", 00:25:30.226 "trsvcid": "4420", 00:25:30.226 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:30.226 "dhchap_key": "key2", 00:25:30.226 "method": "bdev_nvme_attach_controller", 00:25:30.226 "req_id": 1 00:25:30.226 } 00:25:30.226 Got JSON-RPC error response 00:25:30.226 response: 00:25:30.226 { 00:25:30.226 "code": -32602, 00:25:30.226 "message": "Invalid parameters" 00:25:30.226 } 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 
00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.226 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.485 request: 00:25:30.485 { 00:25:30.485 "name": "nvme0", 00:25:30.485 "trtype": "tcp", 00:25:30.485 "traddr": "10.0.0.1", 00:25:30.485 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:30.485 "adrfam": "ipv4", 00:25:30.485 "trsvcid": "4420", 00:25:30.485 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:30.485 "dhchap_key": "key1", 00:25:30.485 "dhchap_ctrlr_key": "ckey2", 00:25:30.485 "method": "bdev_nvme_attach_controller", 00:25:30.485 
"req_id": 1 00:25:30.485 } 00:25:30.485 Got JSON-RPC error response 00:25:30.485 response: 00:25:30.485 { 00:25:30.485 "code": -32602, 00:25:30.485 "message": "Invalid parameters" 00:25:30.485 } 00:25:30.485 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:30.485 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:25:30.485 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:30.485 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:30.485 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:30.485 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:30.485 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.486 rmmod nvme_tcp 00:25:30.486 rmmod nvme_fabrics 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77649 ']' 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77649 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 77649 ']' 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 77649 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 77649 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 77649' 00:25:30.486 killing process with pid 77649 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 77649 00:25:30.486 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 77649 00:25:30.744 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.744 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.744 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.744 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.744 09:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.744 09:18:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.744 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.744 09:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:30.744 09:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:31.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:31.568 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:31.568 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:31.826 09:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.79a /tmp/spdk.key-null.XzM /tmp/spdk.key-sha256.Ur9 /tmp/spdk.key-sha384.5LH /tmp/spdk.key-sha512.Rgb /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:25:31.826 09:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:32.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:32.084 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:32.084 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:32.084 ************************************ 00:25:32.084 END TEST nvmf_auth_host 00:25:32.084 ************************************ 00:25:32.084 00:25:32.084 real 0m33.984s 00:25:32.084 user 0m30.355s 00:25:32.084 sys 0m4.184s 00:25:32.084 09:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:32.084 09:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.084 09:18:44 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:25:32.084 09:18:44 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:32.084 09:18:44 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 
']' 00:25:32.084 09:18:44 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:32.084 09:18:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.084 ************************************ 00:25:32.084 START TEST nvmf_digest 00:25:32.084 ************************************ 00:25:32.084 09:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:32.084 * Looking for test storage... 00:25:32.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.342 09:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:32.343 09:18:44 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:32.343 Cannot find device "nvmf_tgt_br" 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:32.343 Cannot find device "nvmf_tgt_br2" 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:32.343 Cannot find device "nvmf_tgt_br" 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:32.343 Cannot find device "nvmf_tgt_br2" 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:32.343 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:32.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:32.343 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:32.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:32.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:25:32.602 00:25:32.602 --- 10.0.0.2 ping statistics --- 00:25:32.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.602 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:32.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:32.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:25:32.602 00:25:32.602 --- 10.0.0.3 ping statistics --- 00:25:32.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.602 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:32.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:32.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:25:32.602 00:25:32.602 --- 10.0.0.1 ping statistics --- 00:25:32.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.602 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:32.602 ************************************ 00:25:32.602 START TEST nvmf_digest_clean 00:25:32.602 ************************************ 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.602 09:18:44 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79202 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79202 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 79202 ']' 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:32.602 09:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:32.602 [2024-05-15 09:18:45.044878] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:25:32.602 [2024-05-15 09:18:45.045163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.874 [2024-05-15 09:18:45.183072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.874 [2024-05-15 09:18:45.292045] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.874 [2024-05-15 09:18:45.292298] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.874 [2024-05-15 09:18:45.292419] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.874 [2024-05-15 09:18:45.292475] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.874 [2024-05-15 09:18:45.292506] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:32.874 [2024-05-15 09:18:45.292650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.808 09:18:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:33.808 09:18:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:33.808 09:18:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.808 09:18:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:33.808 09:18:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.808 null0 00:25:33.808 [2024-05-15 09:18:46.122486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.808 [2024-05-15 09:18:46.146409] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:33.808 [2024-05-15 09:18:46.146935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79230 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79230 /var/tmp/bperf.sock 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 79230 ']' 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:33.808 
09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:33.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:33.808 09:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.808 [2024-05-15 09:18:46.201342] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:25:33.808 [2024-05-15 09:18:46.201970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79230 ] 00:25:34.066 [2024-05-15 09:18:46.346863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.066 [2024-05-15 09:18:46.455635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.999 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:34.999 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:34.999 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:34.999 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:34.999 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:34.999 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.999 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:35.566 nvme0n1 00:25:35.566 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:35.566 09:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:35.566 Running I/O for 2 seconds... 
00:25:37.523 00:25:37.523 Latency(us) 00:25:37.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.523 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:37.523 nvme0n1 : 2.00 15654.76 61.15 0.00 0.00 8170.22 7427.41 18474.91 00:25:37.523 =================================================================================================================== 00:25:37.523 Total : 15654.76 61.15 0.00 0.00 8170.22 7427.41 18474.91 00:25:37.523 0 00:25:37.523 09:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:37.523 09:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:37.523 09:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:37.523 09:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:37.523 | select(.opcode=="crc32c") 00:25:37.523 | "\(.module_name) \(.executed)"' 00:25:37.523 09:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79230 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 79230 ']' 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 79230 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:38.089 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79230 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79230' 00:25:38.090 killing process with pid 79230 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 79230 00:25:38.090 Received shutdown signal, test time was about 2.000000 seconds 00:25:38.090 00:25:38.090 Latency(us) 00:25:38.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.090 =================================================================================================================== 00:25:38.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 79230 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79296 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79296 /var/tmp/bperf.sock 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 79296 ']' 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:38.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:38.090 09:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:38.348 [2024-05-15 09:18:50.550565] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:25:38.348 [2024-05-15 09:18:50.550878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:25:38.348 Zero copy mechanism will not be used. 
00:25:38.348 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79296 ] 00:25:38.348 [2024-05-15 09:18:50.692276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.606 [2024-05-15 09:18:50.802909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.172 09:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:39.172 09:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:39.172 09:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:39.172 09:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:39.172 09:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:39.431 09:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.431 09:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.998 nvme0n1 00:25:39.998 09:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:39.998 09:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:39.998 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:39.998 Zero copy mechanism will not be used. 00:25:39.998 Running I/O for 2 seconds... 
00:25:42.528 00:25:42.528 Latency(us) 00:25:42.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.528 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:42.528 nvme0n1 : 2.00 8621.08 1077.64 0.00 0.00 1852.86 1654.00 2683.86 00:25:42.528 =================================================================================================================== 00:25:42.528 Total : 8621.08 1077.64 0.00 0.00 1852.86 1654.00 2683.86 00:25:42.528 0 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:42.528 | select(.opcode=="crc32c") 00:25:42.528 | "\(.module_name) \(.executed)"' 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79296 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 79296 ']' 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 79296 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79296 00:25:42.528 killing process with pid 79296 00:25:42.528 Received shutdown signal, test time was about 2.000000 seconds 00:25:42.528 00:25:42.528 Latency(us) 00:25:42.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.528 =================================================================================================================== 00:25:42.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79296' 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 79296 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 79296 00:25:42.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
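Each pass ends the same way: the helper reads the accel framework statistics back over the bperf socket and confirms the crc32c operations were executed by the expected module (software here, since the run was started with scan_dsa=false). A standalone sketch of that check, reusing the exact RPC call and jq filter from the trace (variable names are only illustrative):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# Yields "<module_name> <executed>" for the crc32c opcode.
read -r acc_module acc_executed < <(
  "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# The digest path must actually have been exercised, and by the software
# module when no DSA offload was requested.
(( acc_executed > 0 )) && [[ $acc_module == software ]] \
  || echo "unexpected accel stats: $acc_module $acc_executed" >&2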
00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79356 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79356 /var/tmp/bperf.sock 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 79356 ']' 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:42.528 09:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.528 [2024-05-15 09:18:54.948040] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
00:25:42.528 [2024-05-15 09:18:54.948786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79356 ] 00:25:42.786 [2024-05-15 09:18:55.085008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.786 [2024-05-15 09:18:55.187432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.721 09:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:43.721 09:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:43.721 09:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:43.721 09:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:43.721 09:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:43.980 09:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.980 09:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:44.239 nvme0n1 00:25:44.239 09:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:44.239 09:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:44.239 Running I/O for 2 seconds... 
00:25:46.783 00:25:46.783 Latency(us) 00:25:46.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.783 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:46.783 nvme0n1 : 2.00 18317.79 71.55 0.00 0.00 6981.55 2075.31 15229.32 00:25:46.783 =================================================================================================================== 00:25:46.783 Total : 18317.79 71.55 0.00 0.00 6981.55 2075.31 15229.32 00:25:46.783 0 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:46.783 | select(.opcode=="crc32c") 00:25:46.783 | "\(.module_name) \(.executed)"' 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79356 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 79356 ']' 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 79356 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79356 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79356' 00:25:46.783 killing process with pid 79356 00:25:46.783 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 79356 00:25:46.783 Received shutdown signal, test time was about 2.000000 seconds 00:25:46.783 00:25:46.783 Latency(us) 00:25:46.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.783 =================================================================================================================== 00:25:46.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.784 09:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 79356 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79412 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79412 /var/tmp/bperf.sock 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 79412 ']' 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:46.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:46.784 09:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:47.042 [2024-05-15 09:18:59.252116] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:25:47.042 [2024-05-15 09:18:59.252400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79412 ] 00:25:47.042 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:47.042 Zero copy mechanism will not be used. 
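The zero-copy notices above are informational rather than failures: 131072-byte I/Os are larger than the 65536-byte zero-copy threshold, so zero copy is simply skipped for this pass. The clean-digest test repeats the same pass over four workload shapes; condensed (a sketch of the sequence traced in this log, not the actual loop in digest.sh):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# The four run_bperf invocations in this test, in order:
#   randread   4096  qd=128
#   randread  131072 qd=16    (above the 64 KiB zero-copy threshold)
#   randwrite  4096  qd=128
#   randwrite 131072 qd=16    (above the 64 KiB zero-copy threshold)
for spec in "randread 4096 128" "randread 131072 16" \
            "randwrite 4096 128" "randwrite 131072 16"; do
  read -r rw bs qd <<<"$spec"
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
  # ...then framework_start_init, attach with --ddgst, perform_tests, check
  # the crc32c accel stats and kill bdevperf, as sketched after the first pass.
done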
00:25:47.042 [2024-05-15 09:18:59.396938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.300 [2024-05-15 09:18:59.498711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.867 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:47.867 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:47.867 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:47.867 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:47.867 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:48.127 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.127 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.703 nvme0n1 00:25:48.704 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:48.704 09:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:48.704 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:48.704 Zero copy mechanism will not be used. 00:25:48.704 Running I/O for 2 seconds... 00:25:50.604 00:25:50.604 Latency(us) 00:25:50.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.604 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:50.604 nvme0n1 : 2.00 7660.46 957.56 0.00 0.00 2084.02 1404.34 5710.99 00:25:50.604 =================================================================================================================== 00:25:50.604 Total : 7660.46 957.56 0.00 0.00 2084.02 1404.34 5710.99 00:25:50.862 0 00:25:50.862 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:50.862 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:50.862 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:50.862 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:50.862 | select(.opcode=="crc32c") 00:25:50.862 | "\(.module_name) \(.executed)"' 00:25:50.862 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79412 00:25:51.120 09:19:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 79412 ']' 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 79412 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79412 00:25:51.120 killing process with pid 79412 00:25:51.120 Received shutdown signal, test time was about 2.000000 seconds 00:25:51.120 00:25:51.120 Latency(us) 00:25:51.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.120 =================================================================================================================== 00:25:51.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79412' 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 79412 00:25:51.120 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 79412 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79202 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 79202 ']' 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 79202 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79202 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79202' 00:25:51.378 killing process with pid 79202 00:25:51.378 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 79202 00:25:51.378 [2024-05-15 09:19:03.662736] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 79202 00:25:51.378 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:51.637 00:25:51.637 real 0m18.902s 00:25:51.637 user 0m36.274s 00:25:51.637 sys 0m5.399s 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:51.637 ************************************ 00:25:51.637 END TEST 
nvmf_digest_clean 00:25:51.637 ************************************ 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:51.637 ************************************ 00:25:51.637 START TEST nvmf_digest_error 00:25:51.637 ************************************ 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79504 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79504 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 79504 ']' 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:51.637 09:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.637 [2024-05-15 09:19:03.993495] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:25:51.637 [2024-05-15 09:19:03.993794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.895 [2024-05-15 09:19:04.128055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.895 [2024-05-15 09:19:04.248241] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.895 [2024-05-15 09:19:04.248587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.895 [2024-05-15 09:19:04.248789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.895 [2024-05-15 09:19:04.249063] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:51.895 [2024-05-15 09:19:04.249167] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.895 [2024-05-15 09:19:04.249301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.829 09:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:52.829 09:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:52.829 09:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:52.829 09:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:52.829 09:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.829 [2024-05-15 09:19:05.033939] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.829 null0 00:25:52.829 [2024-05-15 09:19:05.131297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.829 [2024-05-15 09:19:05.155226] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:52.829 [2024-05-15 09:19:05.155765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79536 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79536 /var/tmp/bperf.sock 00:25:52.829 
09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 79536 ']' 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:52.829 09:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.829 [2024-05-15 09:19:05.216636] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:25:52.829 [2024-05-15 09:19:05.217058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79536 ] 00:25:53.088 [2024-05-15 09:19:05.367914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.088 [2024-05-15 09:19:05.508732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.021 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:54.021 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:54.021 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:54.021 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:54.282 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:54.282 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.282 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.282 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.282 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:54.282 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:54.540 nvme0n1 00:25:54.540 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:54.540 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.540 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.540 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.540 09:19:06 
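At this point the error-path variant is fully wired up: earlier the target assigned the crc32c opcode to the error accel module (accel_assign_opc -o crc32c -m error), and the trace above reattaches bdevperf with --ddgst, sets the initiator's NVMe options (bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1), and arms corruption of crc32c on the target (accel_error_inject_error -o crc32c -t corrupt -i 256). Condensed (a sketch built only from the RPCs visible in the trace; TGT_RPC is a hypothetical stand-in for the target-side rpc_cmd helper, while the bperf RPCs go to /var/tmp/bperf.sock as shown):

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
TGT_RPC() { "$SPDK/scripts/rpc.py" "$@"; }  # assumption: the real rpc_cmd targets the nvmf_tgt socket

# Target side: route crc32c through the error-injection accel module
# (issued before framework init, hence nvmf_tgt was started with --wait-for-rpc).
TGT_RPC accel_assign_opc -o crc32c -m error

# Initiator side: NVMe error statistics on, bdev retry count set as the test does.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
  --nvme-error-stat --bdev-retry-count -1

# Start from a clean injection state, attach with data digest enabled,
# then arm crc32c corruption (-t corrupt -i 256, as in the trace).
TGT_RPC accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

Because the target now corrupts the crc32c values behind its data digests, the perform_tests run that follows shows the initiator detecting "data digest error" on received data and completing those reads with COMMAND TRANSIENT TRANSPORT ERROR (00/22).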
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:54.540 09:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:54.799 Running I/O for 2 seconds... 00:25:54.799 [2024-05-15 09:19:07.076952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.077784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.078050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.095222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.095704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.095951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.113135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.113580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.113845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.131086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.131535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.131842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.149274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.149731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.149951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.166959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.167415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.167722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.185362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 
00:25:54.799 [2024-05-15 09:19:07.185917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.186190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.203501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.203999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.204223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.222738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.223224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.223493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.799 [2024-05-15 09:19:07.240958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:54.799 [2024-05-15 09:19:07.241393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.799 [2024-05-15 09:19:07.241676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.058 [2024-05-15 09:19:07.258967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.058 [2024-05-15 09:19:07.259434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.058 [2024-05-15 09:19:07.259736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.058 [2024-05-15 09:19:07.276988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.058 [2024-05-15 09:19:07.277477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.058 [2024-05-15 09:19:07.277838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.058 [2024-05-15 09:19:07.295159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.058 [2024-05-15 09:19:07.295664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.058 [2024-05-15 09:19:07.295907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.058 [2024-05-15 09:19:07.313283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.058 [2024-05-15 09:19:07.313744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.058 [2024-05-15 09:19:07.313964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.058 [2024-05-15 09:19:07.331126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.058 [2024-05-15 09:19:07.331690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.058 [2024-05-15 09:19:07.331930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.058 [2024-05-15 09:19:07.349481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.058 [2024-05-15 09:19:07.349959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.058 [2024-05-15 09:19:07.350226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.058 [2024-05-15 09:19:07.367837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.058 [2024-05-15 09:19:07.368312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.058 [2024-05-15 09:19:07.368535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.058 [2024-05-15 09:19:07.385761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.058 [2024-05-15 09:19:07.386206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.058 [2024-05-15 09:19:07.386432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.059 [2024-05-15 09:19:07.403748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.059 [2024-05-15 09:19:07.404236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.059 [2024-05-15 09:19:07.404470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.059 [2024-05-15 09:19:07.421699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.059 [2024-05-15 09:19:07.422164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.059 [2024-05-15 09:19:07.422461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.059 [2024-05-15 09:19:07.439757] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.059 [2024-05-15 09:19:07.440256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.059 [2024-05-15 09:19:07.440479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.059 [2024-05-15 09:19:07.457713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.059 [2024-05-15 09:19:07.458128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.059 [2024-05-15 09:19:07.458364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.059 [2024-05-15 09:19:07.475418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.059 [2024-05-15 09:19:07.475879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.059 [2024-05-15 09:19:07.476142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.059 [2024-05-15 09:19:07.495018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.059 [2024-05-15 09:19:07.495607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.059 [2024-05-15 09:19:07.495991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.515012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.515609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.515963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.539401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.540435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.541048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.564877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.565447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.565974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:55.318 [2024-05-15 09:19:07.584936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.585452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.585704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.602760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.603233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.603461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.620404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.620873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.621120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.638198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.638637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.638860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.655980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.656343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.656581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.673845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.674320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.674522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.691638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.692135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.692342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.709473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.709954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.710236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.727422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.727897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.728141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.318 [2024-05-15 09:19:07.745291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.318 [2024-05-15 09:19:07.745781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.318 [2024-05-15 09:19:07.745985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.576 [2024-05-15 09:19:07.763474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.764012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.764236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.781596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.782049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.782316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.799568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.800051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.800293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.817341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.817817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.818034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.835126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.836501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.836803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.853828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.854301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.854509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.871680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.872168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.872375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.889611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.890048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.890349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.909710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.910284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.910658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.930032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.930508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.930868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.950436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.950953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:55.577 [2024-05-15 09:19:07.951221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.970698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.971300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.971846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:07.992441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:07.993046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:07.993367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.577 [2024-05-15 09:19:08.010380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.577 [2024-05-15 09:19:08.010892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.577 [2024-05-15 09:19:08.011111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.835 [2024-05-15 09:19:08.028463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.835 [2024-05-15 09:19:08.028941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.835 [2024-05-15 09:19:08.029254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.835 [2024-05-15 09:19:08.046312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.835 [2024-05-15 09:19:08.046826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.835 [2024-05-15 09:19:08.047061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.835 [2024-05-15 09:19:08.064116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.835 [2024-05-15 09:19:08.064625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.835 [2024-05-15 09:19:08.064890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.835 [2024-05-15 09:19:08.081707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.835 [2024-05-15 09:19:08.082441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:13415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.835 [2024-05-15 09:19:08.082762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.835 [2024-05-15 09:19:08.099729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.835 [2024-05-15 09:19:08.100248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.835 [2024-05-15 09:19:08.100529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.835 [2024-05-15 09:19:08.117486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.117963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.118203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.836 [2024-05-15 09:19:08.135060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.135511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.135861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.836 [2024-05-15 09:19:08.153612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.154107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.154432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.836 [2024-05-15 09:19:08.172129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.172667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.172993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.836 [2024-05-15 09:19:08.190254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.190791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.191069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.836 [2024-05-15 09:19:08.208088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.208537] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.208873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.836 [2024-05-15 09:19:08.227397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.227977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.228272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.836 [2024-05-15 09:19:08.252953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.253464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.253737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.836 [2024-05-15 09:19:08.270841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:55.836 [2024-05-15 09:19:08.271297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.836 [2024-05-15 09:19:08.271592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.288849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.289336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.289619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.306626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.307107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.307354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.324613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.325115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.325388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.342493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.342982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.343433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.360537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.361060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.361314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.378100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.378605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.378934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.396347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.396858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.397160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.414270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.414775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.415071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.432580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.433077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.433355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.450632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.451068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.451334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.468602] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.469052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.469287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.486488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.486971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.487212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.504353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.504863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.095 [2024-05-15 09:19:08.505122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.095 [2024-05-15 09:19:08.522299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.095 [2024-05-15 09:19:08.522785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.096 [2024-05-15 09:19:08.523010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.540193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.540628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.540858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.557937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.558401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.558692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.575771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.576265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.576511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.593520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.594031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.594269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.611439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.611987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.612268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.629348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.629903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.630221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.647316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.647801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.648117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.665208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.665676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.665949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.682922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.683362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.683646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.700756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.701223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.701557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.718679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.719131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.719397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.736539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.737012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.737327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.754670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.755129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.755389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.772493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.772954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.773268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.354 [2024-05-15 09:19:08.790337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.354 [2024-05-15 09:19:08.790818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.354 [2024-05-15 09:19:08.791109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.808215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.808718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.808948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.825937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.826375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.826643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.843511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.843973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.844242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.861406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.861872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.862223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.879275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.879691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.880001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.896960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.897478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.897821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.915010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.915448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.915731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.932984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.933395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.933675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.950714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.951180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.613 [2024-05-15 09:19:08.951434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.968683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.969122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.969354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:08.986598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:08.987077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:08.987360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:09.004663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:09.005153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:09.005395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:09.022738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:09.023209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:09.023596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 [2024-05-15 09:19:09.040769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2175120) 00:25:56.613 [2024-05-15 09:19:09.041270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.613 [2024-05-15 09:19:09.041531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.613 00:25:56.613 Latency(us) 00:25:56.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.613 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:56.613 nvme0n1 : 2.01 13814.21 53.96 0.00 0.00 9258.93 7989.15 36700.16 00:25:56.613 =================================================================================================================== 00:25:56.613 Total : 13814.21 53.96 0.00 0.00 9258.93 7989.15 36700.16 00:25:56.613 0 00:25:56.872 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:56.872 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:56.872 09:19:09 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:56.872 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:56.872 | .driver_specific 00:25:56.872 | .nvme_error 00:25:56.872 | .status_code 00:25:56.872 | .command_transient_transport_error' 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 108 > 0 )) 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79536 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 79536 ']' 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 79536 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79536 00:25:57.130 killing process with pid 79536 00:25:57.130 Received shutdown signal, test time was about 2.000000 seconds 00:25:57.130 00:25:57.130 Latency(us) 00:25:57.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.130 =================================================================================================================== 00:25:57.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79536' 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 79536 00:25:57.130 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 79536 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79592 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79592 /var/tmp/bperf.sock 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 79592 ']' 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:57.388 
09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:57.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:57.388 09:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:57.388 [2024-05-15 09:19:09.710226] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:25:57.388 [2024-05-15 09:19:09.710602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79592 ] 00:25:57.388 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:57.388 Zero copy mechanism will not be used. 00:25:57.685 [2024-05-15 09:19:09.849384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.685 [2024-05-15 09:19:09.957262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.251 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:58.251 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:58.251 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:58.251 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:58.817 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:58.817 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.817 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:58.817 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.817 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.817 09:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.076 nvme0n1 00:25:59.076 09:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:59.076 09:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.076 09:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.076 09:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.076 09:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:59.076 09:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:25:59.076 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:59.076 Zero copy mechanism will not be used. 00:25:59.076 Running I/O for 2 seconds... 00:25:59.076 [2024-05-15 09:19:11.454878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.455656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.455932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.460414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.460770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.461040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.465495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.465856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.466067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.470596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.470935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.471146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.475565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.475917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.476094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.480434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.480791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.480998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.485358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.485713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.485930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.490354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.490793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.491011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.495510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.495897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.496160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.500520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.500844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.501045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.505324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.505694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.505907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.510168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.510484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.510734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.076 [2024-05-15 09:19:11.515081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.076 [2024-05-15 09:19:11.515406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.076 [2024-05-15 09:19:11.515628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.519985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.520334] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.520651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.525054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.525394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.525636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.530026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.530412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.530698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.536065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.536570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.536670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.541025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.541392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.541697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.546145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.546483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.546725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.551069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.551444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.551656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.556059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.556422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.556714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.561129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.561450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.561700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.566014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.566363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.566563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.570839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.571160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.571370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.575687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.576024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.335 [2024-05-15 09:19:11.576238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.335 [2024-05-15 09:19:11.580537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.335 [2024-05-15 09:19:11.580865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.336 [2024-05-15 09:19:11.581050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.336 [2024-05-15 09:19:11.585335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.336 [2024-05-15 09:19:11.585699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.336 [2024-05-15 09:19:11.585888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.336 [2024-05-15 09:19:11.590185] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650)
00:25:59.336 [2024-05-15 09:19:11.590513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.336 [2024-05-15 09:19:11.590741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same ERROR/NOTICE/NOTICE triplet — nvme_tcp.c:1450 "data digest error on tqpair=(0x1dd8650)", the nvme_qpair.c:243 READ command print (sqid:1 cid:15 nsid:1, varying lba, len:32), and the nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (sqhd cycling 0001/0021/0041/0061) — repeats for well over a hundred further READ commands on the same qpair between 09:19:11.595 and 09:19:12.263 ...]
00:25:59.858 [2024-05-15 09:19:12.268363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650)
00:25:59.858 [2024-05-15 09:19:12.268701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.858 [2024-05-15 09:19:12.268930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.858 [2024-05-15 09:19:12.273259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.858 [2024-05-15 09:19:12.273621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.858 [2024-05-15 09:19:12.273840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.858 [2024-05-15 09:19:12.278146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.858 [2024-05-15 09:19:12.278479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.858 [2024-05-15 09:19:12.278745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.858 [2024-05-15 09:19:12.282974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.858 [2024-05-15 09:19:12.283297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-05-15 09:19:12.283475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-05-15 09:19:12.287804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.859 [2024-05-15 09:19:12.288148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-05-15 09:19:12.288356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.859 [2024-05-15 09:19:12.292797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.859 [2024-05-15 09:19:12.293176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-05-15 09:19:12.293353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.859 [2024-05-15 09:19:12.297889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:25:59.859 [2024-05-15 09:19:12.298248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-05-15 09:19:12.298503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.174 [2024-05-15 09:19:12.302870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.174 [2024-05-15 09:19:12.303146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.303336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.307761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.308138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.308317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.312674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.313071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.313262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.317649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.318015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.318288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.322627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.322937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.323108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.327418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.327768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.328028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.332295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.332609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.332826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.337141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.337461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.337714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.341981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.342284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.342452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.346704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.346989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.347147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.351483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.351845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.352030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.356351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.356679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.356886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.361163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.361480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.361713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.366091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.366432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.366624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.371036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.371365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.371623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.377080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.377398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.377632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.382007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.382337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.382514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.386889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.387237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.387432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.391971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.392271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.392505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.397030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.397366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.397538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.401966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.402249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.402446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.406869] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.407251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.407492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.411858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.412223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.412431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.416706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.417027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.417216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.421508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.421848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.422026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.426520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.426932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.427142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.431837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.432200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.432481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.175 [2024-05-15 09:19:12.436988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.175 [2024-05-15 09:19:12.437310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.175 [2024-05-15 09:19:12.437538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.441872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.442187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.442390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.446874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.447200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.447402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.451657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.452023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.452284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.456647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.457184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.457407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.461880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.462159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.462326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.466544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.466895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.467078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.471320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.471674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.471905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.476277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.476607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.476843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.481211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.481499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.481695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.486004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.486320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.486486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.490767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.491124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.491364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.495861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.496205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.496376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.500803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.501263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.501510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.505974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.506354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.506630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.511048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.511466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.511705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.516498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.517010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.517222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.521973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.522444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.522672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.527334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.527751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.527938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.532570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.532997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.533201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.537685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.538011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.538213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.542530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.542878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:00.176 [2024-05-15 09:19:12.543106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.547848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.548179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.548391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.553325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.553666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.553867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.558659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.559021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.559224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.563775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.564178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.564397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.568789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.569158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.569411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.176 [2024-05-15 09:19:12.573722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.176 [2024-05-15 09:19:12.574059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.176 [2024-05-15 09:19:12.574296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.434 [2024-05-15 09:19:12.578695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.434 [2024-05-15 09:19:12.579072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.434 [2024-05-15 09:19:12.579275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.434 [2024-05-15 09:19:12.583600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.434 [2024-05-15 09:19:12.584017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.434 [2024-05-15 09:19:12.584287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.434 [2024-05-15 09:19:12.588531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.434 [2024-05-15 09:19:12.588885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.589070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.593378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.593686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.593884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.598176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.598473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.598767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.603037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.603339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.603645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.607969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.608351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.608584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.612933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.613316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.613535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.617874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.618243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.618408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.622724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.623048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.623250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.627441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.627756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.627982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.632349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.632664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.632868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.637108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.637402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.637736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.642018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.642314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.642605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.646867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 
00:26:00.435 [2024-05-15 09:19:12.647158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.647351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.651680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.652000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.652248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.656547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.656894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.657085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.661428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.661773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.662018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.666250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.666554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.666784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.671076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.671360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.671641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.675988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.676293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.676534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.680954] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.681282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.681521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.686360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.686702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.686969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.691922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.692301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.692518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.697166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.697582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.697801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.702234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.702630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.702890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.707197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.707582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.707799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.712146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.712458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.712770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.717078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.717465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.717759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.722056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.435 [2024-05-15 09:19:12.722399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.435 [2024-05-15 09:19:12.722717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.435 [2024-05-15 09:19:12.727026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.727358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.727654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.731998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.732334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.732611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.737003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.737325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.737614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.742247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.742587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.742816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.747262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.747592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.747974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.754082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.754581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.754910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.760734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.761161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.761437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.766517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.766852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.767026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.771484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.771882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.772068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.776353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.776686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.776901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.781202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.781488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.781681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.785960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.786237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.786435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.790763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.791073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.791269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.795461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.795831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.796040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.800241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.800605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.800853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.805086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.805410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.805620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.809814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.810124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.810349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.814767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.815164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.815333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.819695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.820129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:00.436 [2024-05-15 09:19:12.820306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.824602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.824907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.825095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.829272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.829639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.829841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.834038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.834344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.834621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.838839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.839148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.839342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.843523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.843861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.844120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.848324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.848695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.848904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.853211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.853530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.853741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.857977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.858293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.858537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.862791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.436 [2024-05-15 09:19:12.863098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.436 [2024-05-15 09:19:12.863301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.436 [2024-05-15 09:19:12.867508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.437 [2024-05-15 09:19:12.867871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.437 [2024-05-15 09:19:12.868090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.437 [2024-05-15 09:19:12.872665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.437 [2024-05-15 09:19:12.872996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.437 [2024-05-15 09:19:12.873247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.878266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.878789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.879081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.884768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.885286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.885607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.891384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.891912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.892210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.896738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.897040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.897261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.901650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.901967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.902176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.906772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.907247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.907498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.912320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.912992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.913274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.918255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.918750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.918988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.923679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.695 [2024-05-15 09:19:12.924117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.924471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.929528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 
00:26:00.695 [2024-05-15 09:19:12.929895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.695 [2024-05-15 09:19:12.930180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.695 [2024-05-15 09:19:12.934662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.934982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.935178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.939525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.939895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.940125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.944570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.944888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.945114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.949505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.949907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.950128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.954553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.954875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.955121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.959459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.959803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.960079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.964517] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.964866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.965082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.969455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.969845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.970085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.974459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.974769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.974990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.979234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.979548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.979776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.984134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.984447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.984679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.989163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.989515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.989776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.994243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.994635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.994965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:12.999369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:12.999722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:12.999964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.004642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.005025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.005277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.009789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.010202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.010475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.014934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.015412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.015622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.020014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.020386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.020621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.025112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.025471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.025745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.030924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.031409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.031692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.036492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.036915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.037178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.042024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.042397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.042719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.047732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.048160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.048437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.053295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.053752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.054000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.058826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.059270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.059705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.064642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.065043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.065298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.070152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.070582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.070851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.075892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.696 [2024-05-15 09:19:13.076318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.696 [2024-05-15 09:19:13.076664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.696 [2024-05-15 09:19:13.081744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.082169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.082527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.087509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.087930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.088168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.092663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.092991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.093215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.097587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.097915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.098120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.102500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.102837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.103062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.107464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.107843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:00.697 [2024-05-15 09:19:13.108077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.112637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.113007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.113278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.117739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.118168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.118360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.122829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.123210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.123512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.127941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.128283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.128468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.697 [2024-05-15 09:19:13.132965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.697 [2024-05-15 09:19:13.133301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.697 [2024-05-15 09:19:13.133470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.138018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.138374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.138717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.143160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.143515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.143711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.148149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.148474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.148760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.153277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.153604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.153883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.158255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.158638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.158892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.163372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.163757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.163968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.168424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.168773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.169003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.173583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.173958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.174249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.178759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.179078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.179400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.184010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.954 [2024-05-15 09:19:13.184358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.954 [2024-05-15 09:19:13.184664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.954 [2024-05-15 09:19:13.188977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.189348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.189587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.194131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.194519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.194743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.199172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.199536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.199796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.204172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.204478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.204728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.209215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.209529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.209804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.214254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 
00:26:00.955 [2024-05-15 09:19:13.214622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.214830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.219812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.220256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.220623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.225176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.225536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.225771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.230283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.230730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.231010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.235508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.235945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.236163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.240684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.241062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.241253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.245710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.246079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.246365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.250971] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.251402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.251731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.256477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.256828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.257013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.261433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.261865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.262043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.266579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.267006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.267312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.271994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.272465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.272681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.277617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.278062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.278277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.282761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.283166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.283404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.287913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.288259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.288610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.292980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.293337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.293596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.298029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.298382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.298685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.303048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.303423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.303615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.308100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.308469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.308680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.312981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.313344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.313618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.317887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.318225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.318442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.322891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.323267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.323555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.328039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.955 [2024-05-15 09:19:13.328430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.955 [2024-05-15 09:19:13.328732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.955 [2024-05-15 09:19:13.333119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.333581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.333848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.338343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.338777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.339014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.343309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.343710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.344009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.348382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.348729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.348981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.353273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.353604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.353872] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.358208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.358622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.358855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.363289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.363681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.363907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.368318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.368779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.369181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.373591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.373904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.374169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.378358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.378694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.378874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.383069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.383340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.383503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.387704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.387986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.388155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.392289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.392569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.956 [2024-05-15 09:19:13.392875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.956 [2024-05-15 09:19:13.397203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:00.956 [2024-05-15 09:19:13.397493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.397778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.402061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.402377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.402583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.406863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.407155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.407364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.411663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.411970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.412147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.416533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.416953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.417197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.421740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.422143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.422409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.426978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.427440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.427646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.432112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.432502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.432757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.437420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.437862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.438061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.214 [2024-05-15 09:19:13.442468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dd8650) 00:26:01.214 [2024-05-15 09:19:13.442871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.214 [2024-05-15 09:19:13.443132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.214 00:26:01.214 Latency(us) 00:26:01.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.214 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:01.214 nvme0n1 : 2.00 6138.91 767.36 0.00 0.00 2602.52 1895.86 9736.78 00:26:01.214 =================================================================================================================== 00:26:01.214 Total : 6138.91 767.36 0.00 0.00 2602.52 1895.86 9736.78 00:26:01.214 0 00:26:01.214 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:01.214 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:01.214 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:01.214 | .driver_specific 00:26:01.214 | .nvme_error 00:26:01.214 | .status_code 00:26:01.214 | .command_transient_transport_error' 00:26:01.214 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # (( 396 > 0 )) 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79592 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 79592 ']' 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 79592 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79592 00:26:01.472 killing process with pid 79592 00:26:01.472 Received shutdown signal, test time was about 2.000000 seconds 00:26:01.472 00:26:01.472 Latency(us) 00:26:01.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.472 =================================================================================================================== 00:26:01.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79592' 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 79592 00:26:01.472 09:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 79592 00:26:01.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79658 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79658 /var/tmp/bperf.sock 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 79658 ']' 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
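The (( 396 > 0 )) check above is the pass condition for the randread leg: the 396 transient transport errors counted by bdevperf must be non-zero, which suggests the injected data-digest corruptions surfaced as retried TRANSIENT TRANSPORT ERROR completions rather than failed I/O (the summary table reports Fail/s 0.00, and 767.36 MiB/s is consistent with 6138.91 IOPS at the 131072-byte I/O size of that job). A minimal sketch of how that count is read back, using only the socket path, bdev name and jq filter visible in the trace; the bperf_rpc/get_transient_errcount wrappers belong to the test's digest.sh and are not reproduced here, and the counters assume bdev_nvme_set_options --nvme-error-stat was applied as it is for the randwrite run below:

    # sketch only: query bdevperf's per-NVMe error counters over its RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'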
00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:01.730 09:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:01.730 [2024-05-15 09:19:14.091310] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:26:01.730 [2024-05-15 09:19:14.092156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79658 ] 00:26:01.988 [2024-05-15 09:19:14.225968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.988 [2024-05-15 09:19:14.332681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.920 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.190 nvme0n1 00:26:03.190 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:03.190 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:03.190 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.190 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:03.190 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:03.190 09:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.447 Running I/O for 2 seconds... 
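This second leg repeats the experiment for writes: bdevperf is started idle (-z) on /var/tmp/bperf.sock with a 4096-byte randwrite workload at queue depth 128 for 2 seconds, NVMe error counters and unlimited bdev retries are enabled, any leftover crc32c injection is first disabled, the accel layer is then told to corrupt crc32c results, and the controller is attached with --ddgst so data digests are generated and checked on the TCP connection; perform_tests finally starts the run. A condensed sketch of that RPC sequence follows, with commands and arguments copied from the trace; the split between the bperf socket and the default RPC socket mirrors the bperf_rpc/rpc_cmd distinction in the log and is otherwise an assumption, as is the $SPDK shorthand for the repo path shown above:

    SPDK=/home/vagrant/spdk_repo/spdk
    # bdevperf side: record NVMe error statistics and retry failed I/O indefinitely
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # bdevperf side: attach the NVMe/TCP controller with data digest (DDGST) enabled
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # default RPC socket: enable crc32c error injection of type 'corrupt' (arguments as in the log)
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # trigger the 2-second randwrite run in the waiting (-z) bdevperf instance
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests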
00:26:03.447 [2024-05-15 09:19:15.785163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fef90 00:26:03.447 [2024-05-15 09:19:15.787996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.447 [2024-05-15 09:19:15.788539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.447 [2024-05-15 09:19:15.802319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190feb58 00:26:03.447 [2024-05-15 09:19:15.805134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.447 [2024-05-15 09:19:15.805343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:03.447 [2024-05-15 09:19:15.818860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fe2e8 00:26:03.447 [2024-05-15 09:19:15.821583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.447 [2024-05-15 09:19:15.821898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:03.447 [2024-05-15 09:19:15.835386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fda78 00:26:03.447 [2024-05-15 09:19:15.838089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.447 [2024-05-15 09:19:15.838383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:03.447 [2024-05-15 09:19:15.851915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fd208 00:26:03.447 [2024-05-15 09:19:15.854490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.447 [2024-05-15 09:19:15.854793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:03.447 [2024-05-15 09:19:15.868444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fc998 00:26:03.447 [2024-05-15 09:19:15.871119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.447 [2024-05-15 09:19:15.871427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:03.447 [2024-05-15 09:19:15.885032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fc128 00:26:03.447 [2024-05-15 09:19:15.887649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.447 [2024-05-15 09:19:15.887945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:26:03.705 [2024-05-15 09:19:15.901516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fb8b8 00:26:03.705 [2024-05-15 09:19:15.904114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.705 [2024-05-15 09:19:15.904399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:03.705 [2024-05-15 09:19:15.918026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fb048 00:26:03.705 [2024-05-15 09:19:15.920643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:15.920956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:15.934726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fa7d8 00:26:03.706 [2024-05-15 09:19:15.937252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:15.937585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:15.951380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f9f68 00:26:03.706 [2024-05-15 09:19:15.953950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:15.954282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:15.968226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f96f8 00:26:03.706 [2024-05-15 09:19:15.970787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:15.971115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:15.985043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f8e88 00:26:03.706 [2024-05-15 09:19:15.987566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:15.987839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.001573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f8618 00:26:03.706 [2024-05-15 09:19:16.004018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.004316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.018061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f7da8 00:26:03.706 [2024-05-15 09:19:16.020486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.020779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.034430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f7538 00:26:03.706 [2024-05-15 09:19:16.036896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.037229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.051307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f6cc8 00:26:03.706 [2024-05-15 09:19:16.053685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.053987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.067847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f6458 00:26:03.706 [2024-05-15 09:19:16.070227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.070522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.084365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f5be8 00:26:03.706 [2024-05-15 09:19:16.086753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.087070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.100695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f5378 00:26:03.706 [2024-05-15 09:19:16.103002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.103280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.117116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f4b08 00:26:03.706 [2024-05-15 09:19:16.119396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.119696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:03.706 [2024-05-15 09:19:16.133540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f4298 00:26:03.706 [2024-05-15 09:19:16.135973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.706 [2024-05-15 09:19:16.136286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:03.963 [2024-05-15 09:19:16.150478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f3a28 00:26:03.963 [2024-05-15 09:19:16.152841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.963 [2024-05-15 09:19:16.153229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:03.963 [2024-05-15 09:19:16.167278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f31b8 00:26:03.963 [2024-05-15 09:19:16.169628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.963 [2024-05-15 09:19:16.169935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.963 [2024-05-15 09:19:16.183808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f2948 00:26:03.963 [2024-05-15 09:19:16.186080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.963 [2024-05-15 09:19:16.186382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:03.963 [2024-05-15 09:19:16.200312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f20d8 00:26:03.963 [2024-05-15 09:19:16.202568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.963 [2024-05-15 09:19:16.202860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:03.963 [2024-05-15 09:19:16.216821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f1868 00:26:03.963 [2024-05-15 09:19:16.219035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.963 [2024-05-15 09:19:16.219304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:03.963 [2024-05-15 09:19:16.233324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f0ff8 00:26:03.963 [2024-05-15 09:19:16.235564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.235916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.250064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f0788 00:26:03.964 [2024-05-15 09:19:16.252271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.252616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.266892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190eff18 00:26:03.964 [2024-05-15 09:19:16.269098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.269417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.283491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ef6a8 00:26:03.964 [2024-05-15 09:19:16.285670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.285966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.299906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190eee38 00:26:03.964 [2024-05-15 09:19:16.302054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.302375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.316318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ee5c8 00:26:03.964 [2024-05-15 09:19:16.318477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.318823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.333134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190edd58 00:26:03.964 [2024-05-15 09:19:16.335159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.335468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.349678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ed4e8 00:26:03.964 [2024-05-15 09:19:16.351754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.352115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.365891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ecc78 00:26:03.964 [2024-05-15 09:19:16.367938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.368252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.381635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ec408 00:26:03.964 [2024-05-15 09:19:16.383637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.383944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:03.964 [2024-05-15 09:19:16.397675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ebb98 00:26:03.964 [2024-05-15 09:19:16.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.964 [2024-05-15 09:19:16.400023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:04.221 [2024-05-15 09:19:16.413870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190eb328 00:26:04.221 [2024-05-15 09:19:16.415789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.221 [2024-05-15 09:19:16.416099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:04.221 [2024-05-15 09:19:16.429884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190eaab8 00:26:04.221 [2024-05-15 09:19:16.431759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.221 [2024-05-15 09:19:16.432084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:04.221 [2024-05-15 09:19:16.445731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ea248 00:26:04.221 [2024-05-15 09:19:16.447689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.221 [2024-05-15 09:19:16.448022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:04.221 [2024-05-15 09:19:16.462218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e99d8 00:26:04.221 [2024-05-15 09:19:16.464217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.221 [2024-05-15 
09:19:16.464509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:04.221 [2024-05-15 09:19:16.478558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e9168 00:26:04.222 [2024-05-15 09:19:16.480459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.480732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.494763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e88f8 00:26:04.222 [2024-05-15 09:19:16.496657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.496967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.511237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e8088 00:26:04.222 [2024-05-15 09:19:16.513162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.513436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.527294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e7818 00:26:04.222 [2024-05-15 09:19:16.529211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.529482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.543170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e6fa8 00:26:04.222 [2024-05-15 09:19:16.544989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.545260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.559009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e6738 00:26:04.222 [2024-05-15 09:19:16.560807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.561082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.574823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e5ec8 00:26:04.222 [2024-05-15 09:19:16.576557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:04.222 [2024-05-15 09:19:16.576837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.590123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e5658 00:26:04.222 [2024-05-15 09:19:16.591788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.592103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.605997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e4de8 00:26:04.222 [2024-05-15 09:19:16.607700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.608032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.622156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e4578 00:26:04.222 [2024-05-15 09:19:16.623922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.624196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.638455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e3d08 00:26:04.222 [2024-05-15 09:19:16.640284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.640694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:04.222 [2024-05-15 09:19:16.654832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e3498 00:26:04.222 [2024-05-15 09:19:16.656538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.222 [2024-05-15 09:19:16.656855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.671029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e2c28 00:26:04.479 [2024-05-15 09:19:16.672706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.672981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.686791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e23b8 00:26:04.479 [2024-05-15 09:19:16.688473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18134 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.688790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.703051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e1b48 00:26:04.479 [2024-05-15 09:19:16.704814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.705103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.719580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e12d8 00:26:04.479 [2024-05-15 09:19:16.721259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.721596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.735830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e0a68 00:26:04.479 [2024-05-15 09:19:16.737499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.737800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.751759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e01f8 00:26:04.479 [2024-05-15 09:19:16.753335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.753658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.767958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190df988 00:26:04.479 [2024-05-15 09:19:16.769504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.769818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.784278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190df118 00:26:04.479 [2024-05-15 09:19:16.785828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.786110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.800349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190de8a8 00:26:04.479 [2024-05-15 09:19:16.801868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:20323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.802313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.816221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190de038 00:26:04.479 [2024-05-15 09:19:16.817706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.817991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.838700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190de038 00:26:04.479 [2024-05-15 09:19:16.841443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.841799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.855031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190de8a8 00:26:04.479 [2024-05-15 09:19:16.857731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.858081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.871502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190df118 00:26:04.479 [2024-05-15 09:19:16.874074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.874425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.887781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190df988 00:26:04.479 [2024-05-15 09:19:16.890442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.890742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.904182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e01f8 00:26:04.479 [2024-05-15 09:19:16.906787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.479 [2024-05-15 09:19:16.907103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:04.479 [2024-05-15 09:19:16.920635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e0a68 00:26:04.737 [2024-05-15 09:19:16.923197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:16.923630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:16.937155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e12d8 00:26:04.737 [2024-05-15 09:19:16.939783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:16.940115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:16.953747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e1b48 00:26:04.737 [2024-05-15 09:19:16.956292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:16.956617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:16.970378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e23b8 00:26:04.737 [2024-05-15 09:19:16.973016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:16.973437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:16.987478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e2c28 00:26:04.737 [2024-05-15 09:19:16.990125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:16.990458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.004387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e3498 00:26:04.737 [2024-05-15 09:19:17.006969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.007375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.021268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e3d08 00:26:04.737 [2024-05-15 09:19:17.023807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.024235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.038254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e4578 00:26:04.737 [2024-05-15 
09:19:17.040808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.041115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.054890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e4de8 00:26:04.737 [2024-05-15 09:19:17.057454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.057809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.071869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e5658 00:26:04.737 [2024-05-15 09:19:17.074357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.074742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.088769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e5ec8 00:26:04.737 [2024-05-15 09:19:17.091262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.091592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.105739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e6738 00:26:04.737 [2024-05-15 09:19:17.108212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.108497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.122565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e6fa8 00:26:04.737 [2024-05-15 09:19:17.125064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.125396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.139449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e7818 00:26:04.737 [2024-05-15 09:19:17.141937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.142261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.156463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e8088 
00:26:04.737 [2024-05-15 09:19:17.158825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.159127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:04.737 [2024-05-15 09:19:17.173192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e88f8 00:26:04.737 [2024-05-15 09:19:17.175537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.737 [2024-05-15 09:19:17.175875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:04.993 [2024-05-15 09:19:17.189894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e9168 00:26:04.993 [2024-05-15 09:19:17.192283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.993 [2024-05-15 09:19:17.192669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:04.993 [2024-05-15 09:19:17.206736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190e99d8 00:26:04.993 [2024-05-15 09:19:17.209044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.993 [2024-05-15 09:19:17.209334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:04.993 [2024-05-15 09:19:17.223301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ea248 00:26:04.993 [2024-05-15 09:19:17.225538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.993 [2024-05-15 09:19:17.225807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:04.993 [2024-05-15 09:19:17.239691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190eaab8 00:26:04.993 [2024-05-15 09:19:17.241995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.993 [2024-05-15 09:19:17.242241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:04.993 [2024-05-15 09:19:17.256147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190eb328 00:26:04.993 [2024-05-15 09:19:17.258390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.993 [2024-05-15 09:19:17.258714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:04.993 [2024-05-15 09:19:17.272938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with 
pdu=0x2000190ebb98 00:26:04.993 [2024-05-15 09:19:17.275152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.993 [2024-05-15 09:19:17.275471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.289657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ec408 00:26:04.994 [2024-05-15 09:19:17.291811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.292133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.306298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ecc78 00:26:04.994 [2024-05-15 09:19:17.308439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.308728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.322833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ed4e8 00:26:04.994 [2024-05-15 09:19:17.325026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.325347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.339411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190edd58 00:26:04.994 [2024-05-15 09:19:17.341606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.341914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.356172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190ee5c8 00:26:04.994 [2024-05-15 09:19:17.358309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.358662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.372820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190eee38 00:26:04.994 [2024-05-15 09:19:17.374936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.375252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.389256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x176d120) with pdu=0x2000190ef6a8 00:26:04.994 [2024-05-15 09:19:17.391356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.391689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.405811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190eff18 00:26:04.994 [2024-05-15 09:19:17.407920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.408289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:04.994 [2024-05-15 09:19:17.422667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f0788 00:26:04.994 [2024-05-15 09:19:17.424768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.994 [2024-05-15 09:19:17.425074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.439379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f0ff8 00:26:05.250 [2024-05-15 09:19:17.441435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.441759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.455977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f1868 00:26:05.250 [2024-05-15 09:19:17.457978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.458267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.472414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f20d8 00:26:05.250 [2024-05-15 09:19:17.474452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.474858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.489376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f2948 00:26:05.250 [2024-05-15 09:19:17.491381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.491811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.506073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x176d120) with pdu=0x2000190f31b8 00:26:05.250 [2024-05-15 09:19:17.508050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.508401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.522555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f3a28 00:26:05.250 [2024-05-15 09:19:17.524474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.524819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.539189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f4298 00:26:05.250 [2024-05-15 09:19:17.541113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.541415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.555747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f4b08 00:26:05.250 [2024-05-15 09:19:17.557683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.558002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.572452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f5378 00:26:05.250 [2024-05-15 09:19:17.574378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.574763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.589387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f5be8 00:26:05.250 [2024-05-15 09:19:17.591277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.591663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.606665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f6458 00:26:05.250 [2024-05-15 09:19:17.608539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.608894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.623484] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f6cc8 00:26:05.250 [2024-05-15 09:19:17.625384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.625753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.640427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f7538 00:26:05.250 [2024-05-15 09:19:17.642252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.642630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.657208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f7da8 00:26:05.250 [2024-05-15 09:19:17.659042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.659428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.674316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f8618 00:26:05.250 [2024-05-15 09:19:17.676133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.676476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:05.250 [2024-05-15 09:19:17.691093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f8e88 00:26:05.250 [2024-05-15 09:19:17.692928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.250 [2024-05-15 09:19:17.693239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:05.527 [2024-05-15 09:19:17.708022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f96f8 00:26:05.527 [2024-05-15 09:19:17.709780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.527 [2024-05-15 09:19:17.710087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:05.527 [2024-05-15 09:19:17.724688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190f9f68 00:26:05.527 [2024-05-15 09:19:17.726447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.527 [2024-05-15 09:19:17.726820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:05.527 
[2024-05-15 09:19:17.741505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fa7d8 00:26:05.527 [2024-05-15 09:19:17.743249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.527 [2024-05-15 09:19:17.743633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:05.527 [2024-05-15 09:19:17.758405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d120) with pdu=0x2000190fb048 00:26:05.527 [2024-05-15 09:19:17.760193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.527 [2024-05-15 09:19:17.760489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:05.527 00:26:05.527 Latency(us) 00:26:05.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.527 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:05.527 nvme0n1 : 2.00 15294.12 59.74 0.00 0.00 8362.39 6553.60 32206.26 00:26:05.527 =================================================================================================================== 00:26:05.527 Total : 15294.12 59.74 0.00 0.00 8362.39 6553.60 32206.26 00:26:05.527 0 00:26:05.527 09:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:05.527 09:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:05.527 | .driver_specific 00:26:05.527 | .nvme_error 00:26:05.527 | .status_code 00:26:05.527 | .command_transient_transport_error' 00:26:05.527 09:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:05.527 09:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79658 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 79658 ']' 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 79658 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79658 00:26:05.785 killing process with pid 79658 00:26:05.785 Received shutdown signal, test time was about 2.000000 seconds 00:26:05.785 00:26:05.785 Latency(us) 00:26:05.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.785 =================================================================================================================== 00:26:05.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:05.785 09:19:18 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79658' 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 79658 00:26:05.785 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 79658 00:26:06.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.042 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:06.042 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:06.042 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:06.042 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:06.042 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:06.042 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79717 00:26:06.043 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:06.043 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79717 /var/tmp/bperf.sock 00:26:06.043 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 79717 ']' 00:26:06.043 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.043 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:06.043 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.043 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:06.043 09:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.043 [2024-05-15 09:19:18.426704] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:26:06.043 [2024-05-15 09:19:18.427418] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79717 ] 00:26:06.043 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:06.043 Zero copy mechanism will not be used. 
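The trace just above is the bookkeeping between the two error passes: digest.sh's get_transient_errcount fetches the bdev I/O statistics over the bperf.sock RPC socket and pulls out the NVMe transient-transport-error counter, the test merely asserts that it is non-zero (120 here), and the first bdevperf instance (pid 79658) is then killed so run_bperf_err can start a fresh one (pid 79717) for the 131072-byte, queue-depth-16 randwrite pass. A minimal stand-alone sketch of that counting step, reusing the rpc.py path, socket, and bdev name from this run (variable names are illustrative, not the harness's own):

  #!/usr/bin/env bash
  # Count transient transport errors seen by the initiator-side bdev, as digest.sh does above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # bdev_get_iostat reports per-bdev statistics; the nvme_error block read here is the
  # error accounting enabled by bdev_nvme_set_options --nvme-error-stat.
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The harness only requires that at least one injected error was recorded.
  (( errcount > 0 )) && echo "command_transient_transport_error: $errcount"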
00:26:06.300 [2024-05-15 09:19:18.566575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:06.300 [2024-05-15 09:19:18.676840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:07.233 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:26:07.233 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:26:07.233 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:07.233 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:07.491 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:07.491 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:07.491 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:07.491 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:07.491 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:07.491 09:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:07.749 nvme0n1
00:26:08.008 09:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:08.008 09:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:08.008 09:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:08.008 09:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:08.008 09:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:08.008 09:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:08.008 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:08.008 Zero copy mechanism will not be used.
00:26:08.008 Running I/O for 2 seconds...
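That is the whole per-pass setup for this digest-error case: error accounting and unlimited retries are switched on for the bperf-side bdev, any previous CRC32C corruption is cleared, the controller is attached with --ddgst so TCP data digests are generated and verified, the corruption is re-armed, and bdevperf's queued job is kicked off. The wall of data_crc32_calc_done errors that follows is therefore the expected result; the (00/22) in each completion is status code type 0x0, status code 0x22 (Transient Transport Error), which is the counter the harness reads back through bdev_get_iostat afterwards. Pulled out of the harness, the sequence looks roughly like the sketch below; the commands and arguments are the ones traced above, while the socket used for the accel_error_inject_error calls is an assumption (the harness sends them through its rpc_cmd helper, whose target socket is not shown in this excerpt):

  #!/usr/bin/env bash
  # Rough stand-alone version of the setup traced above (not the harness itself).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf=/var/tmp/bperf.sock

  # Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
  # so injected digest errors are counted rather than failing the job.
  "$rpc" -s "$bperf" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any active CRC32C corruption before attaching (assumed default rpc.py socket;
  # in the trace this goes through the harness's rpc_cmd helper).
  "$rpc" accel_error_inject_error -o crc32c -t disable

  # Attach with --ddgst so TCP data digests are generated and checked for this controller.
  "$rpc" -s "$bperf" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Re-arm the corruption: have the accel layer produce bad CRC32C results
  # (-t corrupt -i 32 exactly as in the trace; the meaning of -i is not shown here).
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Start the queued randwrite job in the already-running bdevperf (-z) instance.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf" perform_tests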
00:26:08.008 [2024-05-15 09:19:20.208527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.209950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.210512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.214925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.215417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.215776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.220386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.220831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.221180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.225567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.225981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.226298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.230889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.231410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.231898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.236452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.236881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.237163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.241660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.242062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.242342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.246778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.247143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.247522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.251954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.252329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.252708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.257193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.257678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.257964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.262319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.262679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.263025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.267450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.267893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.268226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.272118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.272487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.272783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.277382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.277989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.278274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.282414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.282767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.283033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.287413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.287842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.288118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.292623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.293040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.293318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.297577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.297951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.298227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.302804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.303194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.303530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.307894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.308229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.308531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.312812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.313167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.313507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.318015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.009 [2024-05-15 09:19:20.318505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 09:19:20.318826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 09:19:20.323186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.323560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.323847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.328357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.328698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.328971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.333093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.333462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.333832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.338107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.338462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.338754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.343173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.343508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.343852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.348224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.348600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 
09:19:20.348962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.353515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.353913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.354188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.358514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.358945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.359249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.363650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.364024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.364290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.368562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.368943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.369263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.373298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.373724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.373990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.377744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.378197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.378486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.382269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.382673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:08.010 [2024-05-15 09:19:20.382963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.386739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.387212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.387531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.391466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.391872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.392173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.396272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.396638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.396909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.400879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.401441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.401752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.405401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.405780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.406084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.409921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.410370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.410673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.414619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.415035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.415356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.419261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.419950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.420242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.424199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.424563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.424842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.429276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.429773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.430208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.435155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.435678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.436110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.440661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.441068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.441397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 09:19:20.445745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.010 [2024-05-15 09:19:20.446149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 09:19:20.446439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.301 [2024-05-15 09:19:20.451347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.301 [2024-05-15 09:19:20.452090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.301 [2024-05-15 09:19:20.452429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.301 [2024-05-15 09:19:20.456862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.301 [2024-05-15 09:19:20.457323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.301 [2024-05-15 09:19:20.457663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.301 [2024-05-15 09:19:20.462212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.301 [2024-05-15 09:19:20.462701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.301 [2024-05-15 09:19:20.462995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.301 [2024-05-15 09:19:20.467567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.301 [2024-05-15 09:19:20.467938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.301 [2024-05-15 09:19:20.468249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.301 [2024-05-15 09:19:20.472582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.301 [2024-05-15 09:19:20.472916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.301 [2024-05-15 09:19:20.473262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.301 [2024-05-15 09:19:20.477628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.301 [2024-05-15 09:19:20.478006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.301 [2024-05-15 09:19:20.478275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.301 [2024-05-15 09:19:20.482644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.301 [2024-05-15 09:19:20.483032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.301 [2024-05-15 09:19:20.483311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.301 [2024-05-15 09:19:20.487752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.301 [2024-05-15 09:19:20.488144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.301 [2024-05-15 09:19:20.488430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.302 [2024-05-15 09:19:20.492824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.302 [2024-05-15 09:19:20.493212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.302 [2024-05-15 09:19:20.493556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.302 [2024-05-15 09:19:20.497822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.302 [2024-05-15 09:19:20.498226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.302 [2024-05-15 09:19:20.498527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.302 [2024-05-15 09:19:20.502632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.302 [2024-05-15 09:19:20.503016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.302 [2024-05-15 09:19:20.503324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.302 [2024-05-15 09:19:20.507761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.302 [2024-05-15 09:19:20.508169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.302 [2024-05-15 09:19:20.508449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.302 [2024-05-15 09:19:20.512896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.302 [2024-05-15 09:19:20.513353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.302 [2024-05-15 09:19:20.513669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.302 [2024-05-15 09:19:20.518111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.302 [2024-05-15 09:19:20.518645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.302 [2024-05-15 09:19:20.518939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.302 [2024-05-15 09:19:20.523202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.302 [2024-05-15 
09:19:20.523664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.302 [2024-05-15 09:19:20.523940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:08.302 [2024-05-15 09:19:20.528035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90
00:26:08.302 [2024-05-15 09:19:20.528609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.302 [2024-05-15 09:19:20.528913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern, a data_crc32_calc_done data digest error on tqpair=(0x176d2c0), the affected WRITE printed by nvme_io_qpair_print_command, and its completion printed by spdk_nvme_print_completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22), repeats for every remaining WRITE on this qpair between 09:19:20.532 and 09:19:21.228; only cid, lba and sqhd vary. The last occurrence is shown below ...]
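What the repeated records show: the TCP transport checks a CRC32C data digest (DDGST) over each received data PDU (the data_crc32_calc_done callback in tcp.c), the recomputed value does not match the digest carried by the PDU, and the affected WRITE is completed back to the host with status (00/22), which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0 so the command remains retryable. The following is a minimal, hypothetical sketch of that digest comparison only; it is not SPDK code, and the buffer, digest value, and function names are invented for illustration.

/* Hypothetical sketch of an NVMe/TCP-style data digest check (not SPDK code). */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Plain bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t  pdu_payload[512] = { 0 };      /* illustrative payload buffer */
    uint32_t received_ddgst   = 0xDEADBEEFu; /* deliberately wrong digest  */
    uint32_t calculated       = crc32c(pdu_payload, sizeof(pdu_payload));

    if (calculated != received_ddgst) {
        /* This mismatch is the condition logged above as a data digest
         * error; the affected command then completes with status 00/22
         * (COMMAND TRANSIENT TRANSPORT ERROR) and dnr:0, i.e. the Do Not
         * Retry bit is clear, so the host is allowed to retry it. */
        printf("Data digest error: calculated=0x%08x received=0x%08x\n",
               (unsigned)calculated, (unsigned)received_ddgst);
        return 1;
    }
    return 0;
}

CRC32C (the Castagnoli polynomial) is the digest NVMe/TCP specifies for header and data digests; a real transport would typically use a hardware-accelerated CRC32C and fail the PDU rather than print a message.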
00:26:08.852 [2024-05-15 09:19:21.228061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90
00:26:08.852 [2024-05-15 09:19:21.228418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.852 [2024-05-15 09:19:21.228693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.852 [2024-05-15 09:19:21.233079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.852 [2024-05-15 09:19:21.233422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-05-15 09:19:21.233650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.852 [2024-05-15 09:19:21.238254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.852 [2024-05-15 09:19:21.238690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.852 [2024-05-15 09:19:21.238882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.852 [2024-05-15 09:19:21.243655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.244093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.244347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.248847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.249147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.249355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.253869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.254167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.254428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.258749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.259039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.259301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.263373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.263853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.264038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.267628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.267935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.268223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.272438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.272756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.273048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.277277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.277574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.277761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.282159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.282464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.282708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.287042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.287356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.287576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.853 [2024-05-15 09:19:21.292059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:08.853 [2024-05-15 09:19:21.292398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.853 [2024-05-15 09:19:21.292826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.297217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.297555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.297742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.302264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.302605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.302790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.306830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.307254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.307477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.311154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.311428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.311680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.316065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.316335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.316580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.320940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.321239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.321288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.325786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.326074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.326276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.330742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.331031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 
09:19:21.331225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.335613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.335965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.336194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.112 [2024-05-15 09:19:21.340661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.112 [2024-05-15 09:19:21.340956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.112 [2024-05-15 09:19:21.341225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.345664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.345965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.346223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.350636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.351007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.351223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.355288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.355780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.355995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.359750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.360023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.360239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.364123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.364419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:09.113 [2024-05-15 09:19:21.364619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.368382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.368697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.368944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.372888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.373234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.373517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.377446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.377913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.378088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.381812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.382093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.382346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.386701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.386983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.387173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.391590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.391892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.392160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.396404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.396734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.396938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.401248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.401583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.401763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.406412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.406715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.406900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.411328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.411653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.411889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.416049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.416608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.416862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.421488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.421808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.422054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.425929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.426227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.426423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.430278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.430603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.430788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.434600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.434884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.435070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.439299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.439596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.439841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.444189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.444457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.444664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.448741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.449202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.449407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.453315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.453632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.453821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.458200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.458562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.458756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.463197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.463557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.463744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.469457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.113 [2024-05-15 09:19:21.469920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.113 [2024-05-15 09:19:21.470112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.113 [2024-05-15 09:19:21.476073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.476411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.476815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.482256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.482595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.482887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.488063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.488375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.488739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.493817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.494120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.494386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.500238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.500611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.500837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.505976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.506312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.506604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.511662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.512016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.512208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.516781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.517243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.517427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.521746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.522308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.522503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.527243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.527567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.527791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.532968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.533340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.533563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.538499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.538901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.539128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.544273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 
09:19:21.544692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.544908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.114 [2024-05-15 09:19:21.550059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.114 [2024-05-15 09:19:21.550437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.114 [2024-05-15 09:19:21.550646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.555788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.556173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.556412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.561421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.561788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.562019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.566979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.567292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.567522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.572590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.572939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.573130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.578253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.578591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.578782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.583902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with 
pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.584228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.584448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.589395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.589748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.589987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.594874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.595201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.595394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.600393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.600694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.600880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.606133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.606473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.606787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.611621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.612119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.612330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.616585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.616868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.617069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.621950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.622233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.622424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.627389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.627709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.627925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.632965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.633287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.633478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.638576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.374 [2024-05-15 09:19:21.638896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.374 [2024-05-15 09:19:21.639099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.374 [2024-05-15 09:19:21.644232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.644638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.644991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.650051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.650446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.650661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.655787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.656168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.656741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.661901] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.662226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.662451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.667487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.667841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.668269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.675291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.675671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.676062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.681112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.681502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.681958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.686978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.687319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.687515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.692603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.692963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.693156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.698154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.698507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.698739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.703714] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.704099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.704296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.709304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.709664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.709910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.715021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.715368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.715812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.720758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.721120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.721313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.726442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.726833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.727025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.731632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.732140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.732356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.736727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.737042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.737303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.375 
[2024-05-15 09:19:21.741859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.742193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.742388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.746746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.747091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.747319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.751662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.752052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.752239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.756562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.756868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.757091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.761370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.761922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.762136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.766326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.766662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.766849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.375 [2024-05-15 09:19:21.771148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.375 [2024-05-15 09:19:21.771466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.375 [2024-05-15 09:19:21.771710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.775963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.776409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.776654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.780784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.781075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.781296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.785670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.785997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.786188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.790487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.790861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.791058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.795189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.795513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.795763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.800093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.800412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.800632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.804985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.805305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.805500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.809864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.810187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.810370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.376 [2024-05-15 09:19:21.814671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.376 [2024-05-15 09:19:21.815135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.376 [2024-05-15 09:19:21.815392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.819728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.820074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.820323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.824687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.825090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.825300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.830084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.830516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.830773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.835204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.835741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.835969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.840229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.840633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.840840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.845337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.845706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.845891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.851100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.851452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.851755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.856994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.857323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.857537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.862814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.863172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.863378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.868613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.868938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.869130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.874348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.874729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.874918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.880187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.880537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.880747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.885985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.886324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.886588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.891781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.892149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.892401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.897776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.898199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.898427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.903923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.904356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.904569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.909823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.910303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.910498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.915147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.915605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.915813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.921099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.921384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 
09:19:21.921622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.927101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.927418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.927692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.932739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.933075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.933269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.938669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.939039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.939252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.945338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.945675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.945938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.950959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.951308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.951522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.636 [2024-05-15 09:19:21.956868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.636 [2024-05-15 09:19:21.957198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.636 [2024-05-15 09:19:21.957451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:21.962696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:21.963004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:09.637 [2024-05-15 09:19:21.963233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:21.968523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:21.968873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:21.969170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:21.974114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:21.974428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:21.974663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:21.979530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:21.979850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:21.980180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:21.985168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:21.985487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:21.985798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:21.990703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:21.991034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:21.991266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:21.996635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:21.996941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:21.997154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.002176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.002498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.002758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.007802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.008149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.008365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.013387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.013774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.014215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.019371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.019725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.020039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.024990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.025371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.025612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.030562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.031048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.031362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.035514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.035861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.036097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.041159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.041513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.041767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.046789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.047129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.047380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.052400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.052826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.053039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.058214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.058659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.058889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.064181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.064554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.064786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.070382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.070874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.071152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.637 [2024-05-15 09:19:22.077392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.637 [2024-05-15 09:19:22.077806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.637 [2024-05-15 09:19:22.078287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.083641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.084127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.084354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.089744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.090087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.090311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.095647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.095992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.096214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.101468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.101834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.102080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.107057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.107415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.107648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.112726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.113052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.113257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.118235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.118597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.118802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.123902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 
09:19:22.124202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.124406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.129468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.129788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.130106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.135128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.135424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.135700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.140664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.141010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.141229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.145770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.896 [2024-05-15 09:19:22.146251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.896 [2024-05-15 09:19:22.146616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.896 [2024-05-15 09:19:22.150836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.897 [2024-05-15 09:19:22.151123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.151350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.897 [2024-05-15 09:19:22.156053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.897 [2024-05-15 09:19:22.156348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.156576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.897 [2024-05-15 09:19:22.161737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 
00:26:09.897 [2024-05-15 09:19:22.162043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.162243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.897 [2024-05-15 09:19:22.167224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.897 [2024-05-15 09:19:22.167500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.167733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.897 [2024-05-15 09:19:22.172841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.897 [2024-05-15 09:19:22.173143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.173376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.897 [2024-05-15 09:19:22.178435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.897 [2024-05-15 09:19:22.178783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.178984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.897 [2024-05-15 09:19:22.184439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.897 [2024-05-15 09:19:22.184857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.185073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.897 [2024-05-15 09:19:22.190170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.897 [2024-05-15 09:19:22.190489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.190721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.897 [2024-05-15 09:19:22.196925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x176d2c0) with pdu=0x2000190fef90 00:26:09.897 [2024-05-15 09:19:22.197309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.897 [2024-05-15 09:19:22.197521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.897 00:26:09.897 Latency(us) 00:26:09.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:09.897 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:09.897 nvme0n1 : 2.00 5899.95 737.49 0.00 0.00 2705.55 1505.77 11546.82 00:26:09.897 =================================================================================================================== 00:26:09.897 Total : 5899.95 737.49 0.00 0.00 2705.55 1505.77 11546.82 00:26:09.897 0 00:26:09.897 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:09.897 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:09.897 | .driver_specific 00:26:09.897 | .nvme_error 00:26:09.897 | .status_code 00:26:09.897 | .command_transient_transport_error' 00:26:09.897 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:09.897 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:10.155 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 381 > 0 )) 00:26:10.155 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79717 00:26:10.155 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 79717 ']' 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 79717 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79717 00:26:10.156 killing process with pid 79717 00:26:10.156 Received shutdown signal, test time was about 2.000000 seconds 00:26:10.156 00:26:10.156 Latency(us) 00:26:10.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.156 =================================================================================================================== 00:26:10.156 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79717' 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 79717 00:26:10.156 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 79717 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79504 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 79504 ']' 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 79504 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 
79504 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79504' 00:26:10.413 killing process with pid 79504 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 79504 00:26:10.413 [2024-05-15 09:19:22.764168] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:10.413 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 79504 00:26:10.671 00:26:10.671 real 0m19.053s 00:26:10.671 user 0m36.386s 00:26:10.671 sys 0m5.184s 00:26:10.671 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:10.671 09:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:10.671 ************************************ 00:26:10.671 END TEST nvmf_digest_error 00:26:10.671 ************************************ 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:10.671 rmmod nvme_tcp 00:26:10.671 rmmod nvme_fabrics 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79504 ']' 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79504 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 79504 ']' 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 79504 00:26:10.671 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (79504) - No such process 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 79504 is not found' 00:26:10.671 Process with pid 79504 is not found 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.671 09:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.950 09:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:10.950 00:26:10.950 real 0m38.698s 00:26:10.950 user 1m12.828s 00:26:10.950 sys 0m10.908s 00:26:10.950 09:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:10.950 ************************************ 00:26:10.950 END TEST nvmf_digest 00:26:10.950 09:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:10.950 ************************************ 00:26:10.950 09:19:23 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:26:10.950 09:19:23 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:26:10.950 09:19:23 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:26:10.950 09:19:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:10.950 09:19:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:10.950 09:19:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.950 ************************************ 00:26:10.950 START TEST nvmf_host_multipath 00:26:10.950 ************************************ 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:26:10.950 * Looking for test storage... 00:26:10.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.950 09:19:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:10.951 Cannot find device "nvmf_tgt_br" 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:10.951 Cannot find device "nvmf_tgt_br2" 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:10.951 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:11.213 Cannot find device "nvmf_tgt_br" 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:11.213 Cannot find device "nvmf_tgt_br2" 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:11.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:11.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:11.213 09:19:23 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:11.213 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:11.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:26:11.472 00:26:11.472 --- 10.0.0.2 ping statistics --- 00:26:11.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.472 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:11.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:11.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:26:11.472 00:26:11.472 --- 10.0.0.3 ping statistics --- 00:26:11.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.472 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:11.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:11.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:11.472 00:26:11.472 --- 10.0.0.1 ping statistics --- 00:26:11.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.472 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=79976 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 79976 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@828 -- # '[' -z 79976 ']' 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:11.472 09:19:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:11.472 [2024-05-15 09:19:23.758178] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:26:11.472 [2024-05-15 09:19:23.759180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.730 [2024-05-15 09:19:23.921948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:11.730 [2024-05-15 09:19:24.062722] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.730 [2024-05-15 09:19:24.063030] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:11.730 [2024-05-15 09:19:24.063208] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.730 [2024-05-15 09:19:24.063354] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.730 [2024-05-15 09:19:24.063407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:11.730 [2024-05-15 09:19:24.063641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.730 [2024-05-15 09:19:24.063659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.664 09:19:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:12.664 09:19:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@861 -- # return 0 00:26:12.664 09:19:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:12.664 09:19:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:12.664 09:19:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:12.664 09:19:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.664 09:19:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=79976 00:26:12.664 09:19:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:12.664 [2024-05-15 09:19:25.087472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.922 09:19:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:12.922 Malloc0 00:26:12.922 09:19:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:13.487 09:19:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.745 09:19:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.002 [2024-05-15 09:19:26.202732] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:14.002 [2024-05-15 09:19:26.203337] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.002 09:19:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:14.259 [2024-05-15 09:19:26.452258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:14.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
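At this point the target side is fully assembled: nvmf_veth_init has built the veth/bridge topology, nvmftestinit/nvmfappstart have started nvmf_tgt inside the namespace, and the subsystem has been configured over rpc.py. Condensed into a shell sketch (only commands already traced above; the $rpc shorthand, the backgrounding of nvmf_tgt, and the omitted `ip link set ... up` / ping sanity checks are simplifications, not the literal scripts):

  # Condensed restatement of the bring-up shown in the xtrace above (illustrative).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # nvmf_veth_init: the initiator interface (10.0.0.1) stays on the host, the two
  # target interfaces (10.0.0.2 and 10.0.0.3) move into nvmf_tgt_ns_spdk, and all
  # of them are joined by the nvmf_br bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Target configuration: TCP transport, one 64 MB malloc namespace, and a
  # subsystem with ANA reporting (-r) listening on two ports of the same address.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # (the script waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

With ANA reporting enabled (-r) and two listeners on the same address, the subsystem can advertise a different ANA state per port, which is what the multipath test exercises next with nvmf_subsystem_listener_set_ana_state.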
00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80032 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80032 /var/tmp/bdevperf.sock 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@828 -- # '[' -z 80032 ']' 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:14.259 09:19:26 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:15.194 09:19:27 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:15.194 09:19:27 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@861 -- # return 0 00:26:15.194 09:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:15.452 09:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:15.711 Nvme0n1 00:26:15.711 09:19:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:16.278 Nvme0n1 00:26:16.278 09:19:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:26:16.278 09:19:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:17.224 09:19:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:26:17.224 09:19:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:17.483 09:19:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:17.741 09:19:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:26:17.741 09:19:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80083 00:26:17.741 09:19:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79976 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:17.741 09:19:30 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@66 -- # sleep 6 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:24.298 Attaching 4 probes... 00:26:24.298 @path[10.0.0.2, 4421]: 15670 00:26:24.298 @path[10.0.0.2, 4421]: 16387 00:26:24.298 @path[10.0.0.2, 4421]: 16140 00:26:24.298 @path[10.0.0.2, 4421]: 16390 00:26:24.298 @path[10.0.0.2, 4421]: 16398 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80083 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.298 09:19:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:24.864 09:19:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:26:24.864 09:19:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80196 00:26:24.864 09:19:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:24.864 09:19:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79976 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:31.423 Attaching 4 probes... 
00:26:31.423 @path[10.0.0.2, 4420]: 16385 00:26:31.423 @path[10.0.0.2, 4420]: 17057 00:26:31.423 @path[10.0.0.2, 4420]: 16896 00:26:31.423 @path[10.0.0.2, 4420]: 17078 00:26:31.423 @path[10.0.0.2, 4420]: 17132 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80196 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:31.423 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:31.700 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:26:31.700 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80314 00:26:31.700 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:31.700 09:19:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79976 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:38.291 09:19:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:38.291 09:19:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:38.291 Attaching 4 probes... 
00:26:38.291 @path[10.0.0.2, 4421]: 14033 00:26:38.291 @path[10.0.0.2, 4421]: 16224 00:26:38.291 @path[10.0.0.2, 4421]: 16452 00:26:38.291 @path[10.0.0.2, 4421]: 16382 00:26:38.291 @path[10.0.0.2, 4421]: 15920 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80314 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80427 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79976 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:38.291 09:19:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:44.902 Attaching 4 probes... 
00:26:44.902 00:26:44.902 00:26:44.902 00:26:44.902 00:26:44.902 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80427 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:26:44.902 09:19:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:44.902 09:19:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:45.160 09:19:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:26:45.160 09:19:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79976 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:45.160 09:19:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80539 00:26:45.160 09:19:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:51.718 Attaching 4 probes... 
00:26:51.718 @path[10.0.0.2, 4421]: 18092 00:26:51.718 @path[10.0.0.2, 4421]: 18392 00:26:51.718 @path[10.0.0.2, 4421]: 18392 00:26:51.718 @path[10.0.0.2, 4421]: 18319 00:26:51.718 @path[10.0.0.2, 4421]: 18607 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80539 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:51.718 09:20:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:51.976 09:20:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:26:52.910 09:20:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:52.910 09:20:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80663 00:26:52.910 09:20:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:52.910 09:20:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79976 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:59.506 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:59.506 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:59.507 Attaching 4 probes... 
00:26:59.507 @path[10.0.0.2, 4420]: 17014 00:26:59.507 @path[10.0.0.2, 4420]: 12170 00:26:59.507 @path[10.0.0.2, 4420]: 10010 00:26:59.507 @path[10.0.0.2, 4420]: 18202 00:26:59.507 @path[10.0.0.2, 4420]: 18224 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80663 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:59.507 [2024-05-15 09:20:11.812606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:59.507 09:20:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:59.765 09:20:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:27:06.324 09:20:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:27:06.324 09:20:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79976 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:06.324 09:20:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80843 00:27:06.324 09:20:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:12.897 Attaching 4 probes... 
00:27:12.897 @path[10.0.0.2, 4421]: 18102 00:27:12.897 @path[10.0.0.2, 4421]: 17080 00:27:12.897 @path[10.0.0.2, 4421]: 18775 00:27:12.897 @path[10.0.0.2, 4421]: 19493 00:27:12.897 @path[10.0.0.2, 4421]: 16827 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80843 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80032 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@947 -- # '[' -z 80032 ']' 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # kill -0 80032 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # uname 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 80032 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 80032' 00:27:12.897 killing process with pid 80032 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # kill 80032 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@971 -- # wait 80032 00:27:12.897 Connection closed with partial response: 00:27:12.897 00:27:12.897 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80032 00:27:12.897 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:12.897 [2024-05-15 09:19:26.525971] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:27:12.897 [2024-05-15 09:19:26.526116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80032 ] 00:27:12.897 [2024-05-15 09:19:26.667691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.897 [2024-05-15 09:19:26.797404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.897 Running I/O for 90 seconds... 
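From here back, the run has cycled the two listeners through ANA states and verified, after each change, which port the bdevperf I/O actually lands on: optimized on 4421, non_optimized on 4420, only 4421 optimized, both ports inaccessible (no @path samples at all), listener 4421 removed (traffic falls back to 4420), then 4421 re-added and set optimized (traffic returns to 4421). The helper doing the verification, confirm_io_on_port, can be read off the xtrace as roughly the sketch below; the real function lives in test/nvmf/host/multipath.sh, and details such as the trace.txt redirection and the exact operands of the two [[ ]] checks are not visible in the trace, so treat this as a reconstruction:

  # Reconstruction of confirm_io_on_port from the xtrace above (illustrative).
  confirm_io_on_port() {
      local ana_state=$1 expected_port=$2
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

      # nvmf_path.bt counts I/O per "@path[<addr>, <port>]" inside the target;
      # $nvmfapp_pid is the nvmf_tgt pid (79976 above).
      /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$nvmfapp_pid" \
          /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
      dtrace_pid=$!
      sleep 6

      # Ask the target which listener currently advertises the requested ANA state.
      local active_port
      active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
          jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

      # Take the port out of the first "@path[10.0.0.2, <port>]: <count>" line.
      local port
      port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

      # A mismatch here fails the test (the suite runs with errexit/ERR trapping).
      [[ $active_port == "$expected_port" ]]
      [[ $port == "$expected_port" ]]

      kill "$dtrace_pid"
      rm -f trace.txt
  }

The nvme_qpair.c completions dumped from try.txt below, all ending in ASYMMETRIC ACCESS INACCESSIBLE (03/02), are what bdevperf sees on a path whose listener is in the inaccessible ANA state (status code type 3h, Path Related Status, code 02h); the controller attached with -x multipath is expected to absorb these by switching to the other path, which is what the per-port @path counters above confirm.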
00:27:12.897 [2024-05-15 09:19:36.983228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:12.897 [2024-05-15 09:19:36.983873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.897 [2024-05-15 09:19:36.983888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.983910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.983926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.983949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.983964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.983987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.984003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:101 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.984969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.984996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.898 [2024-05-15 09:19:36.985449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:12.898 [2024-05-15 09:19:36.985471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.898 [2024-05-15 09:19:36.985488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:27:12.899 [2024-05-15 09:19:36.985770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.985964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.985980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.986672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.986959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.986983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.899 [2024-05-15 09:19:36.987000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.987022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:12.899 [2024-05-15 09:19:36.987038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.987061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.987077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.987099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.987115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.987138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.987154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:12.899 [2024-05-15 09:19:36.987176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.899 [2024-05-15 09:19:36.987193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.987904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.900 [2024-05-15 09:19:36.987943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.987967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.900 [2024-05-15 09:19:36.987983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.988006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.900 [2024-05-15 09:19:36.988022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.988045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.900 [2024-05-15 09:19:36.988061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.988084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.900 [2024-05-15 09:19:36.988101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.988124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.900 [2024-05-15 09:19:36.988142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.988165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.900 [2024-05-15 09:19:36.988182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.989786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.900 [2024-05-15 09:19:36.989830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:27:12.900 [2024-05-15 09:19:36.989861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.989877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.989900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.989917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.989939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.989955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.989990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.990007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.990029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.990045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.990068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.990084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.990107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.990123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.990161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.990178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.990201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.990217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:36.990240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:36.990256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:43.612127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:43.612204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:43.612264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:43.612283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:43.612307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:43.612323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:43.612346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:43.612363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:12.900 [2024-05-15 09:19:43.612385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.900 [2024-05-15 09:19:43.612401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.612787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.612825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.612863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.612902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.612955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.612978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.612995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.901 [2024-05-15 09:19:43.613037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.613076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.613117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.901 [2024-05-15 09:19:43.613548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.613596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.613635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.613673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.613711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.613750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:12.901 [2024-05-15 09:19:43.613772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.901 [2024-05-15 09:19:43.613788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.613810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.613825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.613849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.613864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.613886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.613902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.613924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.613940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.613970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.613986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.614025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.614063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.614101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.614138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.614176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:27:12.902 [2024-05-15 09:19:43.614682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.614979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.614995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.902 [2024-05-15 09:19:43.615711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.615752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.615792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.615832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.615900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.615942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:12.902 [2024-05-15 09:19:43.615968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.902 [2024-05-15 09:19:43.615984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.903 [2024-05-15 09:19:43.616033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.616414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.616971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.616986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.617069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.617116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.617158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.617198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.617239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.617279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:27:12.903 [2024-05-15 09:19:43.617304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.617319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.903 [2024-05-15 09:19:43.617359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.903 [2024-05-15 09:19:43.617695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:12.903 [2024-05-15 09:19:43.617720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:43.617735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.617760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:43.617776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.617800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:43.617816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.617841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:43.617856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.617882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:43.617897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.617922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:43.617937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.617962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:43.617978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.618002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:43.618018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.618043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:43.618059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:43.618084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:43.618105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.695474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.695555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.695591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.695626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.695660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.695695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.695729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.695763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:50.695798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.904 [2024-05-15 09:19:50.695831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:50.695874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:50.695924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:50.695959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.695980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:50.695993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:50.696027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.904 [2024-05-15 09:19:50.696062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 
nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.904 [2024-05-15 09:19:50.696590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:12.904 [2024-05-15 09:19:50.696609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.696622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.696641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.696654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.697175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.697212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.697246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.697280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.697321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.697354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.697388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:27:12.905 [2024-05-15 09:19:50.697409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.697422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.697966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.697979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.698013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.698046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.698080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.698114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.698153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.698187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.698220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.905 [2024-05-15 09:19:50.698254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.698306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.698340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.698374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.698408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:12.905 [2024-05-15 09:19:50.698441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:12.905 [2024-05-15 09:19:50.698463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.905 [2024-05-15 09:19:50.698476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.698509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.698553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.698605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.698647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.698684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.698721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.698758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.698794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 
lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.698830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.698867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.698907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.698943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.698965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.698980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.906 [2024-05-15 09:19:50.699472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.699508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.699553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:27:12.906 [2024-05-15 09:19:50.699586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.699601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.699639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.699675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.699713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.699749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.906 [2024-05-15 09:19:50.699786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:12.906 [2024-05-15 09:19:50.699811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.699825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.699847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.699872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.699895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.699910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.699932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.699946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.699968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.699983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.907 [2024-05-15 09:19:50.700136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.907 [2024-05-15 09:19:50.700172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.907 [2024-05-15 09:19:50.700209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.907 [2024-05-15 09:19:50.700245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.907 [2024-05-15 09:19:50.700281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.907 [2024-05-15 09:19:50.700319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.907 [2024-05-15 09:19:50.700355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.907 [2024-05-15 09:19:50.700391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.907 [2024-05-15 09:19:50.700674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.907 [2024-05-15 09:19:50.700697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:12.907 [2024-05-15 09:19:50.700712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.907 [2024-05-15 09:20:04.131277 .. 09:20:04.136246] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [queued READ (sqid:1, lba:15984..16480, len:8) and WRITE (sqid:1, lba:16496..17000, len:8) commands each printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; repetitive per-command notices elided]
00:27:12.910 [2024-05-15 09:20:04.136328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:12.910 [2024-05-15 09:20:04.136346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:12.910 [2024-05-15 09:20:04.136363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16488 len:8 PRP1 0x0 PRP2 0x0
00:27:12.910 [2024-05-15 09:20:04.136384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.910 [2024-05-15 09:20:04.136462] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1822f50 was disconnected and freed. reset controller.
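The burst of ABORTED - SQ DELETION completions above is the expected signature of the active path going away while the verify job still has I/O queued: the host's qpair is deleted, every outstanding command is completed as aborted, and the controller is then reset onto the surviving path (the reconnect to port 4421 immediately below). The ASYMMETRIC ACCESS INACCESSIBLE status suggests the test also flips ANA state on the listeners; the sketch below shows only the simpler listener-removal way to force a path down, reusing rpc.py calls that appear elsewhere in this log, and is illustrative rather than the multipath.sh sequence itself:

  # advertise both paths to the host
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # drop the active path; queued I/O on that qpair completes as ABORTED - SQ DELETION
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420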
00:27:12.910 [2024-05-15 09:20:04.137681] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.910 [2024-05-15 09:20:04.137802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1830980 (9): Bad file descriptor 00:27:12.910 [2024-05-15 09:20:04.138199] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.910 [2024-05-15 09:20:04.138300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.910 [2024-05-15 09:20:04.138364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.910 [2024-05-15 09:20:04.138389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1830980 with addr=10.0.0.2, port=4421 00:27:12.910 [2024-05-15 09:20:04.138415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1830980 is same with the state(5) to be set 00:27:12.910 [2024-05-15 09:20:04.138465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1830980 (9): Bad file descriptor 00:27:12.911 [2024-05-15 09:20:04.138517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.911 [2024-05-15 09:20:04.138537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.911 [2024-05-15 09:20:04.138583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.911 [2024-05-15 09:20:04.138867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.911 [2024-05-15 09:20:04.138896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.911 [2024-05-15 09:20:14.198046] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
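The statistics block that follows is bdevperf's end-of-run report for the Nvme0n1 job: core mask 0x4, queue depth 128, 4096-byte verify workload, roughly 7.3k IOPS over the ~56 s run including the failover window. In outline, a run like this is wired up as below; the bdevperf and bdevperf.py paths and the explicit attach step are assumptions for illustration, while the core mask, queue depth, IO size, workload and RPC socket match what the log reports:

  # start bdevperf idle, waiting for configuration over its own RPC socket (assumed in-tree path)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify &
  # attach the remote namespace as Nvme0n1 via the first path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the workload; the Latency(us) table below is printed when it finishes (assumed helper script path)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests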
00:27:12.911 Received shutdown signal, test time was about 55.880557 seconds 00:27:12.911 00:27:12.911 Latency(us) 00:27:12.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.911 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:12.911 Verification LBA range: start 0x0 length 0x4000 00:27:12.911 Nvme0n1 : 55.88 7346.05 28.70 0.00 0.00 17397.45 1107.87 7030452.42 00:27:12.911 =================================================================================================================== 00:27:12.911 Total : 7346.05 28.70 0.00 0.00 17397.45 1107.87 7030452.42 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.911 09:20:24 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.911 rmmod nvme_tcp 00:27:12.911 rmmod nvme_fabrics 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 79976 ']' 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 79976 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@947 -- # '[' -z 79976 ']' 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # kill -0 79976 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # uname 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79976 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79976' 00:27:12.911 killing process with pid 79976 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # kill 79976 00:27:12.911 [2024-05-15 09:20:25.043194] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@971 -- # wait 79976 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.911 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.190 09:20:25 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:13.190 ************************************ 00:27:13.190 END TEST nvmf_host_multipath 00:27:13.190 ************************************ 00:27:13.190 00:27:13.190 real 1m2.147s 00:27:13.190 user 2m49.054s 00:27:13.190 sys 0m22.545s 00:27:13.190 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:13.190 09:20:25 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:13.190 09:20:25 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:27:13.190 09:20:25 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:13.190 09:20:25 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:13.190 09:20:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.190 ************************************ 00:27:13.190 START TEST nvmf_timeout 00:27:13.190 ************************************ 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:27:13.190 * Looking for test storage... 
00:27:13.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.190 
09:20:25 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.190 09:20:25 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:13.190 Cannot find device "nvmf_tgt_br" 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:13.190 Cannot find device "nvmf_tgt_br2" 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:27:13.190 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:13.191 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:13.191 Cannot find device "nvmf_tgt_br" 00:27:13.191 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:27:13.191 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:13.191 Cannot find device "nvmf_tgt_br2" 00:27:13.191 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:27:13.191 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:13.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:13.449 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:13.449 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:13.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:27:13.708 00:27:13.708 --- 10.0.0.2 ping statistics --- 00:27:13.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.708 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:13.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:13.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:27:13.708 00:27:13.708 --- 10.0.0.3 ping statistics --- 00:27:13.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.708 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:13.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:27:13.708 00:27:13.708 --- 10.0.0.1 ping statistics --- 00:27:13.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.708 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81148 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81148 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 81148 ']' 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:13.708 09:20:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.708 [2024-05-15 09:20:26.029076] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
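For reference, a minimal standalone sketch of the network topology that the nvmf_veth_init trace above builds: one initiator veth in the default namespace, two target veths inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge, with NVMe/TCP port 4420 opened in iptables. Interface names, the namespace name and the addresses are copied from the trace; the ordering, `set -e` and the ASCII diagram are mine, so treat this as an approximation of nvmf/common.sh rather than the script itself.

```bash
#!/usr/bin/env bash
# Sketch of the veth/netns layout used by the test (reconstructed from the trace):
#   default netns: nvmf_init_if 10.0.0.1  <-> nvmf_init_br ---+
#   nvmf_tgt_ns_spdk: nvmf_tgt_if  10.0.0.2 <-> nvmf_tgt_br  -+-- bridge nvmf_br
#   nvmf_tgt_ns_spdk: nvmf_tgt_if2 10.0.0.3 <-> nvmf_tgt_br2 -+
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one end used as an endpoint, the peer acts as a bridge port
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side endpoints into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up, including loopback inside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# single bridge joining the initiator and target veth peers
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# accept NVMe/TCP (port 4420) on the initiator interface and allow hairpin
# forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# reachability checks, mirroring the pings in the trace
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```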
00:27:13.708 [2024-05-15 09:20:26.029351] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.965 [2024-05-15 09:20:26.168673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:13.965 [2024-05-15 09:20:26.285950] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.965 [2024-05-15 09:20:26.286251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.965 [2024-05-15 09:20:26.286444] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.965 [2024-05-15 09:20:26.286790] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.965 [2024-05-15 09:20:26.286880] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.965 [2024-05-15 09:20:26.287105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.965 [2024-05-15 09:20:26.287112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.897 09:20:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:14.897 09:20:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:27:14.897 09:20:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:14.897 09:20:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:14.897 09:20:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.897 09:20:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.897 09:20:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:14.897 09:20:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:15.155 [2024-05-15 09:20:27.462319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.155 09:20:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:15.412 Malloc0 00:27:15.412 09:20:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.669 09:20:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:16.298 09:20:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.556 [2024-05-15 09:20:28.746628] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:16.556 [2024-05-15 09:20:28.747213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
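Condensed from the host/timeout.sh trace above, the target-side provisioning comes down to five RPCs once nvmf_tgt is running inside the namespace. The polling loop on rpc_get_methods stands in for the framework's waitforlisten helper and is an approximation; the binary and script paths are the workspace paths shown in the log.

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# nvmf_tgt runs inside the target namespace on two cores (-m 0x3); its RPC socket
# (/var/tmp/spdk.sock) lives in the filesystem, so rpc.py needs no netns exec
ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```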
00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81203 00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81203 /var/tmp/bdevperf.sock 00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 81203 ']' 00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:16.556 09:20:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.556 [2024-05-15 09:20:28.819034] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:27:16.556 [2024-05-15 09:20:28.819603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81203 ] 00:27:16.556 [2024-05-15 09:20:28.955454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.813 [2024-05-15 09:20:29.077521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.813 09:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:16.813 09:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:27:16.813 09:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:17.071 09:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:17.329 NVMe0n1 00:27:17.586 09:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81219 00:27:17.587 09:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:17.587 09:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:27:17.587 Running I/O for 10 seconds... 
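The host side, condensed from the host/timeout.sh lines 31-53 traced above: bdevperf is started in wait-for-RPC mode (-z) against its own socket, the retry and timeout knobs under test are applied when attaching the controller, and perform_tests starts the 10-second verify workload. My reading of `-r -1` is an unlimited retry count so queued I/O is retried rather than failed while the controller reconnects; everything else is copied from the trace.

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# bdevperf on core 2 (-m 0x4), waiting for RPC configuration (-z) on its own socket
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!

# retry count -1 (assumed: retry indefinitely instead of failing I/O during resets)
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1

# the two knobs this test exercises: give up on the controller after 5 s of failed
# reconnects, with reconnect attempts spaced 2 s apart
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# kick off the workload asynchronously; the test injects the fault while it runs
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
rpc_pid=$!
```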
00:27:18.520 09:20:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.782 [2024-05-15 09:20:31.026168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 
[2024-05-15 09:20:31.026430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.782 [2024-05-15 09:20:31.026837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.782 [2024-05-15 09:20:31.026975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.782 [2024-05-15 09:20:31.026984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.026995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 
[2024-05-15 09:20:31.027351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.783 [2024-05-15 09:20:31.027362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.783 [2024-05-15 09:20:31.027739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.783 [2024-05-15 09:20:31.027749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.027771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.027793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:85 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.027815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.027837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.027859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.027881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.027911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.027935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.027957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.027980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.027991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91392 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 
[2024-05-15 09:20:31.028269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.784 [2024-05-15 09:20:31.028446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.784 [2024-05-15 09:20:31.028587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.784 [2024-05-15 09:20:31.028599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.785 [2024-05-15 09:20:31.028809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.028831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.028853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.028874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.028898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.028920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.028942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.028965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.028976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.028986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.029018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.029039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.029061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.785 [2024-05-15 09:20:31.029082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:18.785 [2024-05-15 09:20:31.029136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:18.785 [2024-05-15 09:20:31.029146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91064 len:8 PRP1 0x0 PRP2 0x0 00:27:18.785 [2024-05-15 09:20:31.029155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029208] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e8420 was disconnected and freed. reset controller. 
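The abort flood above has a single cause: host/timeout.sh line 55 (the first command in the trace block above) removes the TCP listener while bdevperf still has its 128-deep queue in flight, so every queued command on qpair 1 is completed with ABORTED - SQ DELETION and bdev_nvme starts a controller reset. A sketch of that injection step, copied from the trace:

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# Pull the listener out from under the connected initiator; nothing is changed on the
# host side, so bdev_nvme only notices via the dropped connection.
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

With no listener left on 10.0.0.2:4420, every reconnect attempt below fails with connect() errno 111 (ECONNREFUSED) until the 5-second ctrlr-loss timeout expires.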
00:27:18.785 [2024-05-15 09:20:31.029284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.785 [2024-05-15 09:20:31.029296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.785 [2024-05-15 09:20:31.029317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.785 [2024-05-15 09:20:31.029336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.785 [2024-05-15 09:20:31.029356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.785 [2024-05-15 09:20:31.029366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898ce0 is same with the state(5) to be set 00:27:18.785 [2024-05-15 09:20:31.029580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:18.785 [2024-05-15 09:20:31.029599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1898ce0 (9): Bad file descriptor 00:27:18.785 [2024-05-15 09:20:31.029684] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.785 [2024-05-15 09:20:31.029738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.785 [2024-05-15 09:20:31.029771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.785 [2024-05-15 09:20:31.029784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1898ce0 with addr=10.0.0.2, port=4420 00:27:18.785 [2024-05-15 09:20:31.029794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898ce0 is same with the state(5) to be set 00:27:18.785 [2024-05-15 09:20:31.029810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1898ce0 (9): Bad file descriptor 00:27:18.785 [2024-05-15 09:20:31.029825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:18.785 [2024-05-15 09:20:31.029835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:18.785 [2024-05-15 09:20:31.029847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:18.785 [2024-05-15 09:20:31.029864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
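Between reconnect attempts the script polls the bdevperf RPC socket, as the trace below shows (host/timeout.sh lines 57-63). A condensed sketch of those get_controller/get_bdev helpers and the expected progression, reconstructed from the trace: the controller and bdev stay registered while reconnects are still being retried, and both queries come back empty once --ctrlr-loss-timeout-sec has expired. The sleep mirrors line 61 of the script.

```bash
SPDK=/home/vagrant/spdk_repo/spdk

get_controller() {
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
}

[[ $(get_controller) == NVMe0 ]]    # still attached: reconnects are ongoing
[[ $(get_bdev) == NVMe0n1 ]]
sleep 5                             # wait out --ctrlr-loss-timeout-sec
[[ $(get_controller) == '' ]]       # controller dropped after the loss timeout
[[ $(get_bdev) == '' ]]             # and the NVMe0n1 bdev with it
```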
00:27:18.785 [2024-05-15 09:20:31.029874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:18.785 09:20:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:27:20.687 [2024-05-15 09:20:33.030087] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.687 [2024-05-15 09:20:33.030169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.687 [2024-05-15 09:20:33.030205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.687 [2024-05-15 09:20:33.030219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1898ce0 with addr=10.0.0.2, port=4420 00:27:20.687 [2024-05-15 09:20:33.030233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898ce0 is same with the state(5) to be set 00:27:20.687 [2024-05-15 09:20:33.030259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1898ce0 (9): Bad file descriptor 00:27:20.687 [2024-05-15 09:20:33.030276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:20.687 [2024-05-15 09:20:33.030287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:20.687 [2024-05-15 09:20:33.030298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:20.687 [2024-05-15 09:20:33.030322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:20.687 [2024-05-15 09:20:33.030332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:20.687 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:27:20.687 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:20.687 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:27:20.943 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:27:20.943 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:27:20.943 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:27:20.943 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:27:21.201 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:27:21.201 09:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:27:23.103 [2024-05-15 09:20:35.030515] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.103 [2024-05-15 09:20:35.030628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.103 [2024-05-15 09:20:35.030662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.103 [2024-05-15 09:20:35.030676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1898ce0 with addr=10.0.0.2, port=4420 00:27:23.103 [2024-05-15 09:20:35.030690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898ce0 is same with the state(5) to be set 00:27:23.103 [2024-05-15 09:20:35.030715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1898ce0 (9): Bad file 
descriptor 00:27:23.103 [2024-05-15 09:20:35.030741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.103 [2024-05-15 09:20:35.030752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.103 [2024-05-15 09:20:35.030764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.103 [2024-05-15 09:20:35.030788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.103 [2024-05-15 09:20:35.030798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.004 [2024-05-15 09:20:37.030863] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:25.641 00:27:25.641 Latency(us) 00:27:25.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.641 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:25.641 Verification LBA range: start 0x0 length 0x4000 00:27:25.641 NVMe0n1 : 8.14 1390.88 5.43 15.73 0.00 91028.83 3292.40 7030452.42 00:27:25.641 =================================================================================================================== 00:27:25.641 Total : 1390.88 5.43 15.73 0.00 91028.83 3292.40 7030452.42 00:27:25.641 0 00:27:26.209 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:27:26.209 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:26.209 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:27:26.468 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:27:26.468 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:27:26.468 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:27:26.468 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:27:26.727 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:27:26.727 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 81219 00:27:26.727 09:20:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81203 00:27:26.727 09:20:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 81203 ']' 00:27:26.727 09:20:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 81203 00:27:26.727 09:20:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname 00:27:26.727 09:20:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:26.727 09:20:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 81203 00:27:26.727 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:27:26.727 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:27:26.727 killing process with pid 81203 00:27:26.727 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 81203' 00:27:26.727 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 81203 00:27:26.727 Received shutdown signal, test time was about 9.121332 seconds 00:27:26.727 00:27:26.727 Latency(us) 00:27:26.727 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.727 =================================================================================================================== 00:27:26.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.727 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 81203 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.986 [2024-05-15 09:20:39.408336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81335 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81335 /var/tmp/bdevperf.sock 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 81335 ']' 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:26.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:26.986 09:20:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:27:27.245 [2024-05-15 09:20:39.480051] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:27:27.245 [2024-05-15 09:20:39.480151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81335 ] 00:27:27.245 [2024-05-15 09:20:39.626293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.503 [2024-05-15 09:20:39.730125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.438 09:20:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:28.438 09:20:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:27:28.438 09:20:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:28.438 09:20:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:27:29.005 NVMe0n1 00:27:29.005 09:20:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81359 00:27:29.005 09:20:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:27:29.005 09:20:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:29.005 Running I/O for 10 seconds... 
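The second bdevperf instance is attached with explicit reconnect tuning; the two RPCs are copied below verbatim from the trace, with the intent of each timeout noted as a paraphrase (the authoritative definitions are in the bdev_nvme RPC documentation):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Retry option exactly as traced above.
    "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1

    # Attach NVMe0 with the reconnect knobs from the trace:
    #   --reconnect-delay-sec 1       wait about a second between reconnect attempts
    #   --fast-io-fail-timeout-sec 2  start failing queued I/O back to the bdev layer after ~2 s
    #   --ctrlr-loss-timeout-sec 5    give up on the controller after ~5 s without a connection
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

These values are what make the rest of the log legible: the refused connects recur on the 1 s delay, and the listener has to come back within the 5 s loss window for the reset to succeed.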
00:27:29.939 09:20:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.201 [2024-05-15 09:20:42.413695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 
[2024-05-15 09:20:42.413972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.413984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.413994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.201 [2024-05-15 09:20:42.414126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.201 [2024-05-15 09:20:42.414151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.201 [2024-05-15 09:20:42.414173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.201 [2024-05-15 09:20:42.414195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.201 [2024-05-15 09:20:42.414217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.201 [2024-05-15 09:20:42.414238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.201 [2024-05-15 09:20:42.414268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.201 [2024-05-15 09:20:42.414290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.201 [2024-05-15 09:20:42.414388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.201 [2024-05-15 09:20:42.414398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.414677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.414699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.414720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.414742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.414764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.414786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.414809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.414831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 
09:20:42.414887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.414985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.414996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.202 [2024-05-15 09:20:42.415269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.202 [2024-05-15 09:20:42.415290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.202 [2024-05-15 09:20:42.415302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77536 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 
[2024-05-15 09:20:42.415784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.203 [2024-05-15 09:20:42.415950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.415984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.415994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.416017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.416038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.416062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.416084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.203 [2024-05-15 09:20:42.416105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244f390 is same with the state(5) to be set 00:27:30.203 [2024-05-15 09:20:42.416130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.203 [2024-05-15 09:20:42.416139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.203 [2024-05-15 09:20:42.416148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:27:30.203 [2024-05-15 09:20:42.416158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.203 [2024-05-15 09:20:42.416177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.203 [2024-05-15 09:20:42.416186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:27:30.203 [2024-05-15 09:20:42.416196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.203 [2024-05-15 09:20:42.416217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.203 [2024-05-15 09:20:42.416226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:27:30.203 [2024-05-15 09:20:42.416235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.203 [2024-05-15 09:20:42.416246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:27:30.204 [2024-05-15 09:20:42.416254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416488] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 
09:20:42.416954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.416963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.416974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.204 [2024-05-15 09:20:42.416983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.204 [2024-05-15 09:20:42.416992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:27:30.204 [2024-05-15 09:20:42.417002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.204 [2024-05-15 09:20:42.417051] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x244f390 was disconnected and freed. reset controller. 00:27:30.204 [2024-05-15 09:20:42.417281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.204 [2024-05-15 09:20:42.417363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffce0 (9): Bad file descriptor 00:27:30.204 [2024-05-15 09:20:42.417451] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.204 [2024-05-15 09:20:42.417525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.204 [2024-05-15 09:20:42.417574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.204 [2024-05-15 09:20:42.417589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ffce0 with addr=10.0.0.2, port=4420 00:27:30.204 [2024-05-15 09:20:42.417601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ffce0 is same with the state(5) to be set 00:27:30.204 [2024-05-15 09:20:42.417618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffce0 (9): Bad file descriptor 00:27:30.204 [2024-05-15 09:20:42.417634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.204 [2024-05-15 09:20:42.417644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.204 [2024-05-15 09:20:42.417656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.204 [2024-05-15 09:20:42.417677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.204 [2024-05-15 09:20:42.417688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.204 09:20:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:27:31.140 [2024-05-15 09:20:43.417807] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.140 [2024-05-15 09:20:43.417897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.140 [2024-05-15 09:20:43.417931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.140 [2024-05-15 09:20:43.417944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ffce0 with addr=10.0.0.2, port=4420 00:27:31.140 [2024-05-15 09:20:43.417957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ffce0 is same with the state(5) to be set 00:27:31.140 [2024-05-15 09:20:43.417978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffce0 (9): Bad file descriptor 00:27:31.140 [2024-05-15 09:20:43.417995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.140 [2024-05-15 09:20:43.418005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.140 [2024-05-15 09:20:43.418016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.140 [2024-05-15 09:20:43.418038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.140 [2024-05-15 09:20:43.418049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.140 09:20:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.399 [2024-05-15 09:20:43.720162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.399 09:20:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 81359 00:27:32.336 [2024-05-15 09:20:44.432981] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:38.895
00:27:38.895 Latency(us)
00:27:38.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:38.895 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:38.895 Verification LBA range: start 0x0 length 0x4000
00:27:38.895 NVMe0n1 : 10.01 7102.32 27.74 0.00 0.00 17991.80 3042.74 3019898.88
00:27:38.895 ===================================================================================================================
00:27:38.895 Total : 7102.32 27.74 0.00 0.00 17991.80 3042.74 3019898.88
00:27:38.895 0 00:27:38.895 09:20:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81469 00:27:38.895 09:20:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:27:38.895 09:20:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:39.154 Running I/O for 10 seconds...
00:27:40.136 09:20:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.136 [2024-05-15 09:20:52.557503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557965] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.557992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.136 [2024-05-15 09:20:52.558111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 
00:27:40.137 [2024-05-15 09:20:52.558167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558314] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is 
same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6b240 is same with the state(5) to be set 00:27:40.137 [2024-05-15 09:20:52.558538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.558982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.558994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.559006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.559016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.559028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.559038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.559069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.559079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.559091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.559102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.559114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.559129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.559141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.559152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.559164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.137 [2024-05-15 09:20:52.559175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.137 [2024-05-15 09:20:52.559187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:40.138 [2024-05-15 09:20:52.559255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559574] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.559979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.559990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.138 [2024-05-15 09:20:52.560287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.138 [2024-05-15 09:20:52.560300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72848 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:40.139 [2024-05-15 09:20:52.560608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 
09:20:52.560869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.560983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.560997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.561008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.561020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.139 [2024-05-15 09:20:52.561031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.561044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.139 [2024-05-15 09:20:52.561054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.561067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.139 [2024-05-15 09:20:52.561078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.561090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.139 [2024-05-15 09:20:52.561101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.561113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.139 [2024-05-15 09:20:52.561123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.561135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.139 [2024-05-15 09:20:52.561146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.139 [2024-05-15 09:20:52.561159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.139 [2024-05-15 09:20:52.561169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:40.140 [2024-05-15 09:20:52.561677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.140 [2024-05-15 09:20:52.561700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.140 [2024-05-15 09:20:52.561723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.140 [2024-05-15 09:20:52.561745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.140 [2024-05-15 09:20:52.561770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.140 [2024-05-15 09:20:52.561792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.140 [2024-05-15 09:20:52.561804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247e860 is same with the state(5) to be set 00:27:40.140 [2024-05-15 09:20:52.561819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:40.140 [2024-05-15 09:20:52.561828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:27:40.141 [2024-05-15 09:20:52.561837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73112 len:8 PRP1 0x0 PRP2 0x0 00:27:40.141 [2024-05-15 09:20:52.561847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.141 [2024-05-15 09:20:52.561902] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x247e860 was disconnected and freed. reset controller. 00:27:40.141 [2024-05-15 09:20:52.562143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.141 [2024-05-15 09:20:52.562225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffce0 (9): Bad file descriptor 00:27:40.141 [2024-05-15 09:20:52.562333] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.141 [2024-05-15 09:20:52.562384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.141 [2024-05-15 09:20:52.562433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.141 [2024-05-15 09:20:52.562449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ffce0 with addr=10.0.0.2, port=4420 00:27:40.141 [2024-05-15 09:20:52.562460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ffce0 is same with the state(5) to be set 00:27:40.141 [2024-05-15 09:20:52.562478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffce0 (9): Bad file descriptor 00:27:40.141 [2024-05-15 09:20:52.562494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.141 [2024-05-15 09:20:52.562505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.141 [2024-05-15 09:20:52.562517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.141 [2024-05-15 09:20:52.562537] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.141 [2024-05-15 09:20:52.562563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.399 09:20:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:27:41.330 [2024-05-15 09:20:53.562704] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.330 [2024-05-15 09:20:53.562835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.330 [2024-05-15 09:20:53.562876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.330 [2024-05-15 09:20:53.562893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ffce0 with addr=10.0.0.2, port=4420 00:27:41.330 [2024-05-15 09:20:53.562909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ffce0 is same with the state(5) to be set 00:27:41.330 [2024-05-15 09:20:53.562941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffce0 (9): Bad file descriptor 00:27:41.330 [2024-05-15 09:20:53.562962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.330 [2024-05-15 09:20:53.562975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.330 [2024-05-15 09:20:53.562989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.330 [2024-05-15 09:20:53.563016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.330 [2024-05-15 09:20:53.563030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.262 [2024-05-15 09:20:54.563192] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.262 [2024-05-15 09:20:54.563297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.262 [2024-05-15 09:20:54.563336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.262 [2024-05-15 09:20:54.563351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ffce0 with addr=10.0.0.2, port=4420 00:27:42.262 [2024-05-15 09:20:54.563366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ffce0 is same with the state(5) to be set 00:27:42.262 [2024-05-15 09:20:54.563393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffce0 (9): Bad file descriptor 00:27:42.262 [2024-05-15 09:20:54.563413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.262 [2024-05-15 09:20:54.563423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.262 [2024-05-15 09:20:54.563435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.262 [2024-05-15 09:20:54.563462] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.262 [2024-05-15 09:20:54.563475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:43.197 [2024-05-15 09:20:55.566336] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.197 [2024-05-15 09:20:55.566434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.197 [2024-05-15 09:20:55.566472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.197 [2024-05-15 09:20:55.566487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ffce0 with addr=10.0.0.2, port=4420 00:27:43.198 [2024-05-15 09:20:55.566501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ffce0 is same with the state(5) to be set 00:27:43.198 [2024-05-15 09:20:55.566740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffce0 (9): Bad file descriptor 00:27:43.198 [2024-05-15 09:20:55.566957] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:43.198 [2024-05-15 09:20:55.566968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:43.198 [2024-05-15 09:20:55.566979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:43.198 [2024-05-15 09:20:55.570401] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.198 [2024-05-15 09:20:55.570439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:43.198 09:20:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.456 [2024-05-15 09:20:55.854005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.456 09:20:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 81469 00:27:44.388 [2024-05-15 09:20:56.599806] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
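The errno = 111 (ECONNREFUSED) failures above are reconnect attempts made while the target listener is down; the reset only succeeds once nvmf_subsystem_add_listener is re-issued. A minimal sketch of that sequence, using only the RPCs visible in this trace (assumed to run from the SPDK repo with the target on its default RPC socket; the test's own bookkeeping is omitted):

  # drop the TCP listener so the host's reconnect attempts start failing with ECONNREFUSED
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # let a few reconnect attempts fail (the trace shows host/timeout.sh sleeping here)
  sleep 3
  # restore the listener; the next controller reset then completes successfully
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420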
00:27:49.673
00:27:49.673 Latency(us)
00:27:49.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:49.673 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:49.673 Verification LBA range: start 0x0 length 0x4000
00:27:49.673 NVMe0n1 : 10.01 6000.79 23.44 4278.54 0.00 12423.58 577.34 3019898.88
00:27:49.673 ===================================================================================================================
00:27:49.673 Total : 6000.79 23.44 4278.54 0.00 12423.58 0.00 3019898.88
00:27:49.673 0
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81335
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 81335 ']'
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 81335
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 81335
00:27:49.673 killing process with pid 81335 Received shutdown signal, test time was about 10.000000 seconds
00:27:49.673
00:27:49.673 Latency(us)
00:27:49.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:49.673 ===================================================================================================================
00:27:49.673 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']'
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 81335'
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 81335
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 81335
00:27:49.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81578
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81578 /var/tmp/bdevperf.sock
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 81578 ']'
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable
00:27:49.673 09:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:49.673 [2024-05-15 09:21:01.724645] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization...
00:27:49.673 [2024-05-15 09:21:01.724768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81578 ] 00:27:49.673 [2024-05-15 09:21:01.873719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.673 [2024-05-15 09:21:01.976518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.618 09:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:50.618 09:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:27:50.618 09:21:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81594 00:27:50.618 09:21:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:50.618 09:21:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81578 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:50.618 09:21:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:50.877 NVMe0n1 00:27:51.137 09:21:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81640 00:27:51.137 09:21:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:51.137 09:21:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:27:51.137 Running I/O for 10 seconds... 
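Condensed, the bdevperf-side setup traced above amounts to the sketch below. The commands are copied verbatim from the trace; the backgrounding, the wait-for-listen step and the bpftrace.sh attachment performed by the real script are simplified or omitted, and the bdev_nvme_set_options values are reproduced as-is rather than interpreted:

  # start bdevperf idle (-z) on its own RPC socket, then configure it over that socket
  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # attach NVMe0 over TCP; the reconnect behaviour seen later is bounded by the last two options
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # kick off the 10-second randread run
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, bdev_nvme retries a lost TCP connection every 2 seconds and gives up on the controller after 5 seconds, which is the window the timeout test exercises by removing and re-adding the target listener.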
00:27:52.073 09:21:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.335 [2024-05-15 09:21:04.614760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.614994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615177] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78740 is same with the state(5) to be set 00:27:52.335 [2024-05-15 09:21:04.615453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.335 [2024-05-15 09:21:04.615486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:52.336 [2024-05-15 09:21:04.615766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.615978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.615989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616001] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.336 [2024-05-15 09:21:04.616399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.336 [2024-05-15 09:21:04.616409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616444] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106752 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.616974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.616996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.337 [2024-05-15 09:21:04.617130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.337 [2024-05-15 09:21:04.617307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.337 [2024-05-15 09:21:04.617317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617337] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.617985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.617996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.618016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.618037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.618057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.618078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.618099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.618119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.618140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.338 [2024-05-15 09:21:04.618160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.338 [2024-05-15 09:21:04.618170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.339 
[2024-05-15 09:21:04.618181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.339 [2024-05-15 09:21:04.618190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.339 [2024-05-15 09:21:04.618201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.339 [2024-05-15 09:21:04.618211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.339 [2024-05-15 09:21:04.618226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.339 [2024-05-15 09:21:04.618236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.339 [2024-05-15 09:21:04.618247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.339 [2024-05-15 09:21:04.618258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.339 [2024-05-15 09:21:04.618269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf88550 is same with the state(5) to be set 00:27:52.339 [2024-05-15 09:21:04.618282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:52.339 [2024-05-15 09:21:04.618289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:52.339 [2024-05-15 09:21:04.618298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112568 len:8 PRP1 0x0 PRP2 0x0 00:27:52.339 [2024-05-15 09:21:04.618307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.339 [2024-05-15 09:21:04.618354] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf88550 was disconnected and freed. reset controller. 
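(Editor's note, not part of the captured log.) The block above is SPDK draining I/O qpair 0xf88550 while the nvmf_timeout test resets the controller: every queued READ is completed manually with status (00/08), which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION". The standalone sketch below (not part of the SPDK test scripts; the helper name and the small status tables are illustrative, and it assumes the printed pair is status-code-type / status-code in hex) shows how that "(sct/sc)" pair maps onto the NVMe completion status fields:

    import re

    # Subset of NVMe status code types and generic status codes; only the
    # values that actually appear in this log are listed.
    STATUS_CODE_TYPES = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC", 0x2: "MEDIA AND DATA INTEGRITY"}
    GENERIC_STATUS = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

    def decode_completion(line: str):
        # Pull the "(sct/sc)" hex pair out of a printed completion line.
        m = re.search(r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)", line)
        if m is None:
            return None
        sct = int(m["sct"], 16)
        sc = int(m["sc"], 16)
        sct_name = STATUS_CODE_TYPES.get(sct, f"SCT 0x{sct:x}")
        sc_name = GENERIC_STATUS.get(sc, f"SC 0x{sc:02x}") if sct == 0x0 else f"SC 0x{sc:02x}"
        return sct_name, sc_name

    print(decode_completion(
        "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"))
    # -> ('GENERIC', 'ABORTED - SQ DELETION')

Note that every completion in the dump carries dnr:0, i.e. the do-not-retry bit is clear.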
00:27:52.339 [2024-05-15 09:21:04.618596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:52.339 [2024-05-15 09:21:04.618663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3bf50 (9): Bad file descriptor 00:27:52.339 [2024-05-15 09:21:04.618755] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.339 [2024-05-15 09:21:04.618809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.339 [2024-05-15 09:21:04.618843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.339 [2024-05-15 09:21:04.618856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf3bf50 with addr=10.0.0.2, port=4420 00:27:52.339 [2024-05-15 09:21:04.618866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3bf50 is same with the state(5) to be set 00:27:52.339 [2024-05-15 09:21:04.618881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3bf50 (9): Bad file descriptor 00:27:52.339 [2024-05-15 09:21:04.618895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:52.339 [2024-05-15 09:21:04.618905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:52.339 [2024-05-15 09:21:04.618916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:52.339 [2024-05-15 09:21:04.618933] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.339 [2024-05-15 09:21:04.618943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:52.339 09:21:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 81640 00:27:54.238 [2024-05-15 09:21:06.619286] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.238 [2024-05-15 09:21:06.619443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.238 [2024-05-15 09:21:06.619503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.238 [2024-05-15 09:21:06.619525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf3bf50 with addr=10.0.0.2, port=4420 00:27:54.238 [2024-05-15 09:21:06.619563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3bf50 is same with the state(5) to be set 00:27:54.238 [2024-05-15 09:21:06.619625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3bf50 (9): Bad file descriptor 00:27:54.238 [2024-05-15 09:21:06.619651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:54.238 [2024-05-15 09:21:06.619665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:54.238 [2024-05-15 09:21:06.619683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.238 [2024-05-15 09:21:06.619720] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
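(Editor's note, not part of the captured log.) Each reconnect attempt above fails with errno 111 (ECONNREFUSED): nothing is accepting connections on 10.0.0.2:4420 at this point in the test, so bdev_nvme schedules the next controller reset roughly two seconds later (09:21:04 -> 09:21:06 -> 09:21:08 in the wall-clock timestamps). The trace.txt excerpt printed a little further down ("Attaching 5 probes...") records the same cadence, and timeout.sh checks it simply by counting 'reconnect delay bdev controller NVMe0' lines with grep -c. A standalone sketch of that check follows (not part of timeout.sh; the trace literal is copied from the dump below, and the timestamps are assumed to be millisecond offsets):

    # Trace lines exactly as they appear in the "Attaching 5 probes" dump below.
    trace = """\
    1306.438646: reset bdev controller NVMe0
    1306.548400: reconnect bdev controller NVMe0
    3306.940741: reconnect delay bdev controller NVMe0
    3306.978433: reconnect bdev controller NVMe0
    5307.652089: reconnect delay bdev controller NVMe0
    5307.676768: reconnect bdev controller NVMe0
    7308.134026: reconnect delay bdev controller NVMe0
    7308.152395: reconnect bdev controller NVMe0
    """

    # Equivalent of: grep -c 'reconnect delay bdev controller NVMe0' trace.txt
    delays = [float(line.strip().split(":", 1)[0])
              for line in trace.splitlines()
              if "reconnect delay bdev controller NVMe0" in line]
    print(len(delays))  # 3, matching the "(( 3 <= 2 ))" arithmetic traced further down
    # Spacing between consecutive reconnect-delay events, about 2000 ms apart:
    print([round(b - a, 1) for a, b in zip(delays, delays[1:])])  # [2000.7, 2000.5]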
00:27:54.238 [2024-05-15 09:21:06.619737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.766 [2024-05-15 09:21:08.619948] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-05-15 09:21:08.620050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-05-15 09:21:08.620089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-05-15 09:21:08.620111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf3bf50 with addr=10.0.0.2, port=4420 00:27:56.766 [2024-05-15 09:21:08.620125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3bf50 is same with the state(5) to be set 00:27:56.766 [2024-05-15 09:21:08.620152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3bf50 (9): Bad file descriptor 00:27:56.766 [2024-05-15 09:21:08.620172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.766 [2024-05-15 09:21:08.620182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.766 [2024-05-15 09:21:08.620195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.766 [2024-05-15 09:21:08.620220] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.766 [2024-05-15 09:21:08.620232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:58.666 [2024-05-15 09:21:10.620328] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:59.232 00:27:59.232 Latency(us) 00:27:59.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.232 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:59.232 NVMe0n1 : 8.21 2484.19 9.70 15.60 0.00 51234.33 6709.64 7030452.42 00:27:59.232 =================================================================================================================== 00:27:59.232 Total : 2484.19 9.70 15.60 0.00 51234.33 6709.64 7030452.42 00:27:59.232 0 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:59.232 Attaching 5 probes... 
00:27:59.232 1306.438646: reset bdev controller NVMe0 00:27:59.232 1306.548400: reconnect bdev controller NVMe0 00:27:59.232 3306.940741: reconnect delay bdev controller NVMe0 00:27:59.232 3306.978433: reconnect bdev controller NVMe0 00:27:59.232 5307.652089: reconnect delay bdev controller NVMe0 00:27:59.232 5307.676768: reconnect bdev controller NVMe0 00:27:59.232 7308.134026: reconnect delay bdev controller NVMe0 00:27:59.232 7308.152395: reconnect bdev controller NVMe0 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 81594 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81578 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 81578 ']' 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 81578 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:59.232 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 81578 00:27:59.491 killing process with pid 81578 00:27:59.491 Received shutdown signal, test time was about 8.265997 seconds 00:27:59.491 00:27:59.491 Latency(us) 00:27:59.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.491 =================================================================================================================== 00:27:59.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.491 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:27:59.491 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:27:59.491 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 81578' 00:27:59.491 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 81578 00:27:59.491 09:21:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 81578 00:27:59.491 09:21:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:59.749 09:21:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:59.749 09:21:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:59.749 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:59.749 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:00.007 rmmod nvme_tcp 00:28:00.007 rmmod nvme_fabrics 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:28:00.007 09:21:12 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81148 ']' 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81148 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 81148 ']' 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 81148 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 81148 00:28:00.007 killing process with pid 81148 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 81148' 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 81148 00:28:00.007 [2024-05-15 09:21:12.280295] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:00.007 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 81148 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:00.265 ************************************ 00:28:00.265 END TEST nvmf_timeout 00:28:00.265 ************************************ 00:28:00.265 00:28:00.265 real 0m47.169s 00:28:00.265 user 2m17.323s 00:28:00.265 sys 0m6.692s 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:00.265 09:21:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.265 09:21:12 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:28:00.265 09:21:12 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:28:00.265 09:21:12 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:00.265 09:21:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.265 09:21:12 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:00.265 ************************************ 00:28:00.265 END TEST nvmf_tcp 00:28:00.265 ************************************ 00:28:00.265 00:28:00.265 real 12m8.408s 00:28:00.265 user 29m12.461s 00:28:00.265 sys 3m23.761s 00:28:00.265 09:21:12 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 
00:28:00.265 09:21:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.524 09:21:12 -- spdk/autotest.sh@284 -- # [[ 1 -eq 0 ]] 00:28:00.524 09:21:12 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:28:00.524 09:21:12 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:00.524 09:21:12 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:00.524 09:21:12 -- common/autotest_common.sh@10 -- # set +x 00:28:00.524 ************************************ 00:28:00.524 START TEST nvmf_dif 00:28:00.524 ************************************ 00:28:00.524 09:21:12 nvmf_dif -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:28:00.524 * Looking for test storage... 00:28:00.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:00.524 09:21:12 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:00.524 09:21:12 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.524 09:21:12 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.524 09:21:12 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.524 09:21:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.524 09:21:12 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.524 09:21:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.524 09:21:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:28:00.524 09:21:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:00.524 09:21:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:00.524 09:21:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:00.524 09:21:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:00.524 09:21:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:00.524 09:21:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.524 09:21:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:00.524 09:21:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:00.524 09:21:12 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:00.524 09:21:12 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:00.525 Cannot find device "nvmf_tgt_br" 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@155 -- # true 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:00.525 Cannot find device "nvmf_tgt_br2" 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@156 -- # true 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:00.525 Cannot find device "nvmf_tgt_br" 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@158 -- # true 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:00.525 Cannot find device "nvmf_tgt_br2" 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@159 -- # true 00:28:00.525 09:21:12 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:00.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@162 -- # true 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:00.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@163 -- # true 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:00.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:28:00.784 00:28:00.784 --- 10.0.0.2 ping statistics --- 00:28:00.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.784 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:00.784 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:00.784 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:28:00.784 00:28:00.784 --- 10.0.0.3 ping statistics --- 00:28:00.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.784 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:28:00.784 09:21:13 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:01.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:28:01.042 00:28:01.042 --- 10.0.0.1 ping statistics --- 00:28:01.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.042 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:28:01.042 09:21:13 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.042 09:21:13 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:28:01.042 09:21:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:01.042 09:21:13 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:01.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:01.301 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:01.301 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:01.301 09:21:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:01.301 09:21:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:01.301 09:21:13 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:01.301 09:21:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82074 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82074 00:28:01.301 09:21:13 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 82074 ']' 00:28:01.301 09:21:13 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.301 09:21:13 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:01.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.301 09:21:13 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.301 09:21:13 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:01.301 09:21:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.301 09:21:13 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:01.560 [2024-05-15 09:21:13.745408] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:28:01.560 [2024-05-15 09:21:13.745694] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.560 [2024-05-15 09:21:13.884530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.560 [2024-05-15 09:21:14.004602] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:01.872 [2024-05-15 09:21:14.004871] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.872 [2024-05-15 09:21:14.005046] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.872 [2024-05-15 09:21:14.005341] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.872 [2024-05-15 09:21:14.005386] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.872 [2024-05-15 09:21:14.005514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:28:02.443 09:21:14 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.443 09:21:14 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.443 09:21:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:02.443 09:21:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.443 [2024-05-15 09:21:14.825795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.443 09:21:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:02.443 09:21:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.443 ************************************ 00:28:02.443 START TEST fio_dif_1_default 00:28:02.443 ************************************ 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.443 bdev_null0 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.443 09:21:14 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.443 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:02.444 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.444 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.444 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.444 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.444 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.444 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.444 [2024-05-15 09:21:14.881733] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:02.444 [2024-05-15 09:21:14.882121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.444 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.703 { 00:28:02.703 "params": { 00:28:02.703 "name": "Nvme$subsystem", 00:28:02.703 "trtype": "$TEST_TRANSPORT", 00:28:02.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.703 "adrfam": "ipv4", 00:28:02.703 "trsvcid": "$NVMF_PORT", 00:28:02.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.703 "hdgst": ${hdgst:-false}, 00:28:02.703 "ddgst": ${ddgst:-false} 00:28:02.703 }, 00:28:02.703 "method": "bdev_nvme_attach_controller" 00:28:02.703 } 00:28:02.703 EOF 00:28:02.703 )") 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1336 -- # local sanitizers 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.703 "params": { 00:28:02.703 "name": "Nvme0", 00:28:02.703 "trtype": "tcp", 00:28:02.703 "traddr": "10.0.0.2", 00:28:02.703 "adrfam": "ipv4", 00:28:02.703 "trsvcid": "4420", 00:28:02.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.703 "hdgst": false, 00:28:02.703 "ddgst": false 00:28:02.703 }, 00:28:02.703 "method": "bdev_nvme_attach_controller" 00:28:02.703 }' 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:02.703 09:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.703 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:02.703 fio-3.35 00:28:02.703 Starting 1 thread 00:28:14.902 00:28:14.902 filename0: (groupid=0, jobs=1): err= 0: pid=82141: Wed May 15 09:21:25 2024 00:28:14.902 read: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(399MiB/10001msec) 00:28:14.902 slat (usec): min=5, max=100, avg= 7.28, stdev= 1.82 00:28:14.902 clat (usec): min=297, max=3487, avg=371.35, stdev=53.66 00:28:14.902 lat (usec): min=303, max=3523, 
avg=378.62, stdev=54.23 00:28:14.902 clat percentiles (usec): 00:28:14.902 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 338], 00:28:14.902 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 379], 00:28:14.902 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 424], 00:28:14.902 | 99.00th=[ 502], 99.50th=[ 701], 99.90th=[ 766], 99.95th=[ 791], 00:28:14.902 | 99.99th=[ 1926] 00:28:14.902 bw ( KiB/s): min=33632, max=44376, per=100.00%, avg=41032.74, stdev=2742.88, samples=19 00:28:14.902 iops : min= 8408, max=11094, avg=10258.11, stdev=685.73, samples=19 00:28:14.902 lat (usec) : 500=98.99%, 750=0.84%, 1000=0.14% 00:28:14.902 lat (msec) : 2=0.02%, 4=0.01% 00:28:14.902 cpu : usr=81.46%, sys=16.93%, ctx=31, majf=0, minf=0 00:28:14.902 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:14.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.902 issued rwts: total=102092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.902 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:14.902 00:28:14.902 Run status group 0 (all jobs): 00:28:14.902 READ: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=399MiB (418MB), run=10001-10001msec 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.902 00:28:14.902 ************************************ 00:28:14.902 END TEST fio_dif_1_default 00:28:14.902 ************************************ 00:28:14.902 real 0m10.992s 00:28:14.902 user 0m8.764s 00:28:14.902 sys 0m1.982s 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:14.902 09:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:14.902 09:21:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:14.903 09:21:25 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:14.903 09:21:25 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 ************************************ 00:28:14.903 START TEST fio_dif_1_multi_subsystems 00:28:14.903 
************************************ 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 bdev_null0 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 [2024-05-15 09:21:25.937717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 bdev_null1 
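The per-subsystem setup running through this stretch of the trace reduces to four RPCs per subsystem. A condensed sketch of the create_subsystem helper in target/dif.sh, with the arguments taken from the trace above and below (rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client; the explicit loop over sub=0 1 is only illustrative of the two subsystems this test builds):

    # condensed sketch of create_subsystem (target/dif.sh) as traced in this test:
    # a DIF-capable null bdev, an NVMe-oF subsystem, a namespace, and a TCP listener
    for sub in 0 1; do
        rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done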
00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.903 { 00:28:14.903 "params": { 00:28:14.903 "name": "Nvme$subsystem", 00:28:14.903 "trtype": "$TEST_TRANSPORT", 00:28:14.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.903 "adrfam": "ipv4", 00:28:14.903 "trsvcid": "$NVMF_PORT", 00:28:14.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.903 "hdgst": ${hdgst:-false}, 00:28:14.903 "ddgst": ${ddgst:-false} 00:28:14.903 }, 00:28:14.903 "method": "bdev_nvme_attach_controller" 00:28:14.903 } 00:28:14.903 EOF 00:28:14.903 )") 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local 
file 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.903 { 00:28:14.903 "params": { 00:28:14.903 "name": "Nvme$subsystem", 00:28:14.903 "trtype": "$TEST_TRANSPORT", 00:28:14.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.903 "adrfam": "ipv4", 00:28:14.903 "trsvcid": "$NVMF_PORT", 00:28:14.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.903 "hdgst": ${hdgst:-false}, 00:28:14.903 "ddgst": ${ddgst:-false} 00:28:14.903 }, 00:28:14.903 "method": "bdev_nvme_attach_controller" 00:28:14.903 } 00:28:14.903 EOF 00:28:14.903 )") 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.903 09:21:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
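Stripped of the xtrace noise, the two heredocs above plus the jq pass amount to one fio launch. A minimal sketch, assuming the process substitutions are what end up on /dev/fd/62 and /dev/fd/61 in the traced command (fio_bdev, gen_nvmf_target_json and gen_fio_conf are the helpers named in the trace):

    # condensed sketch of the launch traced at target/dif.sh@82: the attach-controller
    # JSON goes to --spdk_json_conf on one descriptor, the fio job file on the other
    fio_bdev --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0 1) \
        <(gen_fio_conf)

fio never touches the kernel NVMe stack here; the spdk_bdev ioengine attaches to the two TCP subsystems itself using the bdev_nvme_attach_controller entries in that JSON, which is printed fully resolved just below.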
00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:14.903 "params": { 00:28:14.903 "name": "Nvme0", 00:28:14.903 "trtype": "tcp", 00:28:14.903 "traddr": "10.0.0.2", 00:28:14.903 "adrfam": "ipv4", 00:28:14.903 "trsvcid": "4420", 00:28:14.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:14.903 "hdgst": false, 00:28:14.903 "ddgst": false 00:28:14.903 }, 00:28:14.903 "method": "bdev_nvme_attach_controller" 00:28:14.903 },{ 00:28:14.903 "params": { 00:28:14.903 "name": "Nvme1", 00:28:14.903 "trtype": "tcp", 00:28:14.903 "traddr": "10.0.0.2", 00:28:14.903 "adrfam": "ipv4", 00:28:14.903 "trsvcid": "4420", 00:28:14.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:14.903 "hdgst": false, 00:28:14.903 "ddgst": false 00:28:14.903 }, 00:28:14.903 "method": "bdev_nvme_attach_controller" 00:28:14.903 }' 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:14.903 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:14.904 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:14.904 09:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.904 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:14.904 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:14.904 fio-3.35 00:28:14.904 Starting 2 threads 00:28:24.872 00:28:24.872 filename0: (groupid=0, jobs=1): err= 0: pid=82304: Wed May 15 09:21:36 2024 00:28:24.872 read: IOPS=5281, BW=20.6MiB/s (21.6MB/s)(206MiB/10001msec) 00:28:24.872 slat (nsec): min=6022, max=64799, avg=13196.01, stdev=3611.53 00:28:24.872 clat (usec): min=375, max=2983, avg=722.01, stdev=46.06 00:28:24.872 lat (usec): min=383, max=3010, avg=735.21, stdev=46.99 00:28:24.872 clat percentiles (usec): 00:28:24.872 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 693], 00:28:24.872 | 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 717], 60.00th=[ 734], 00:28:24.872 | 70.00th=[ 742], 80.00th=[ 750], 90.00th=[ 766], 95.00th=[ 783], 00:28:24.872 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 1012], 99.95th=[ 1057], 00:28:24.872 | 99.99th=[ 1139] 00:28:24.872 bw ( KiB/s): min=20480, max=21728, per=50.08%, avg=21186.21, stdev=354.28, samples=19 00:28:24.872 iops : min= 5120, max= 
5432, avg=5296.53, stdev=88.56, samples=19 00:28:24.872 lat (usec) : 500=0.01%, 750=80.27%, 1000=19.61% 00:28:24.872 lat (msec) : 2=0.11%, 4=0.01% 00:28:24.872 cpu : usr=89.02%, sys=9.94%, ctx=9, majf=0, minf=0 00:28:24.872 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.872 issued rwts: total=52820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.872 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:24.872 filename1: (groupid=0, jobs=1): err= 0: pid=82305: Wed May 15 09:21:36 2024 00:28:24.872 read: IOPS=5295, BW=20.7MiB/s (21.7MB/s)(207MiB/10001msec) 00:28:24.872 slat (nsec): min=4766, max=46978, avg=13042.35, stdev=3063.82 00:28:24.872 clat (usec): min=360, max=3731, avg=720.65, stdev=52.43 00:28:24.872 lat (usec): min=367, max=3756, avg=733.69, stdev=53.12 00:28:24.872 clat percentiles (usec): 00:28:24.872 | 1.00th=[ 611], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 693], 00:28:24.872 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 725], 60.00th=[ 734], 00:28:24.872 | 70.00th=[ 742], 80.00th=[ 750], 90.00th=[ 766], 95.00th=[ 783], 00:28:24.872 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 930], 99.95th=[ 947], 00:28:24.872 | 99.99th=[ 1037] 00:28:24.872 bw ( KiB/s): min=20480, max=21856, per=50.21%, avg=21244.63, stdev=365.62, samples=19 00:28:24.872 iops : min= 5120, max= 5464, avg=5311.16, stdev=91.40, samples=19 00:28:24.872 lat (usec) : 500=0.27%, 750=78.37%, 1000=21.34% 00:28:24.872 lat (msec) : 2=0.01%, 4=0.01% 00:28:24.872 cpu : usr=89.19%, sys=9.76%, ctx=45, majf=0, minf=9 00:28:24.872 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.872 issued rwts: total=52960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.872 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:24.872 00:28:24.872 Run status group 0 (all jobs): 00:28:24.872 READ: bw=41.3MiB/s (43.3MB/s), 20.6MiB/s-20.7MiB/s (21.6MB/s-21.7MB/s), io=413MiB (433MB), run=10001-10001msec 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.872 09:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.872 09:21:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.872 09:21:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:24.872 09:21:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.872 09:21:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.872 09:21:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.872 00:28:24.872 real 0m11.111s 00:28:24.872 user 0m18.594s 00:28:24.872 sys 0m2.247s 00:28:24.872 09:21:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:24.872 09:21:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:24.872 ************************************ 00:28:24.872 END TEST fio_dif_1_multi_subsystems 00:28:24.872 ************************************ 00:28:24.872 09:21:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:24.872 09:21:37 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:24.872 09:21:37 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:24.872 09:21:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.872 ************************************ 00:28:24.872 START TEST fio_dif_rand_params 00:28:24.872 ************************************ 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:24.872 09:21:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.872 bdev_null0 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.872 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.873 [2024-05-15 09:21:37.089219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local asan_lib= 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.873 { 00:28:24.873 "params": { 00:28:24.873 "name": "Nvme$subsystem", 00:28:24.873 "trtype": "$TEST_TRANSPORT", 00:28:24.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.873 "adrfam": "ipv4", 00:28:24.873 "trsvcid": "$NVMF_PORT", 00:28:24.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.873 "hdgst": ${hdgst:-false}, 00:28:24.873 "ddgst": ${ddgst:-false} 00:28:24.873 }, 00:28:24.873 "method": "bdev_nvme_attach_controller" 00:28:24.873 } 00:28:24.873 EOF 00:28:24.873 )") 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
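The sanitizer probing that brackets this configuration step (autotest_common.sh@1336-1349) reduces to the loop below. A condensed sketch using the paths from this run; since neither grep matches anything here, the loop contributes nothing and only the plugin itself lands in LD_PRELOAD:

    # condensed sketch of the fio_plugin wrapper traced above: preload any sanitizer
    # runtime the SPDK fio plugin links against, then run fio with the plugin itself
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_libs=
    for sanitizer in "${sanitizers[@]}"; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && asan_libs+="$asan_lib "
    done
    LD_PRELOAD="$asan_libs $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 /dev/fd/61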
00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:24.873 "params": { 00:28:24.873 "name": "Nvme0", 00:28:24.873 "trtype": "tcp", 00:28:24.873 "traddr": "10.0.0.2", 00:28:24.873 "adrfam": "ipv4", 00:28:24.873 "trsvcid": "4420", 00:28:24.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.873 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.873 "hdgst": false, 00:28:24.873 "ddgst": false 00:28:24.873 }, 00:28:24.873 "method": "bdev_nvme_attach_controller" 00:28:24.873 }' 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:24.873 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.873 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:24.873 ... 
00:28:24.873 fio-3.35 00:28:24.873 Starting 3 threads 00:28:31.489 00:28:31.489 filename0: (groupid=0, jobs=1): err= 0: pid=82455: Wed May 15 09:21:42 2024 00:28:31.489 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5001msec) 00:28:31.489 slat (nsec): min=3612, max=85968, avg=20738.49, stdev=9577.68 00:28:31.489 clat (usec): min=10515, max=18630, avg=11443.01, stdev=1234.98 00:28:31.489 lat (usec): min=10523, max=18647, avg=11463.75, stdev=1235.72 00:28:31.489 clat percentiles (usec): 00:28:31.489 | 1.00th=[10683], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:28:31.489 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11207], 60.00th=[11207], 00:28:31.489 | 70.00th=[11338], 80.00th=[11338], 90.00th=[11863], 95.00th=[12518], 00:28:31.489 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:28:31.489 | 99.99th=[18744] 00:28:31.489 bw ( KiB/s): min=29184, max=35328, per=33.37%, avg=33450.67, stdev=1764.36, samples=9 00:28:31.489 iops : min= 228, max= 276, avg=261.33, stdev=13.78, samples=9 00:28:31.489 lat (msec) : 20=100.00% 00:28:31.489 cpu : usr=88.40%, sys=10.36%, ctx=87, majf=0, minf=9 00:28:31.489 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.489 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.489 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.489 filename0: (groupid=0, jobs=1): err= 0: pid=82456: Wed May 15 09:21:42 2024 00:28:31.489 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5004msec) 00:28:31.489 slat (nsec): min=7064, max=67694, avg=21242.40, stdev=9118.23 00:28:31.489 clat (usec): min=7899, max=18628, avg=11424.65, stdev=1210.91 00:28:31.489 lat (usec): min=7914, max=18669, avg=11445.89, stdev=1211.43 00:28:31.489 clat percentiles (usec): 00:28:31.489 | 1.00th=[10683], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:28:31.489 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11207], 60.00th=[11207], 00:28:31.489 | 70.00th=[11338], 80.00th=[11338], 90.00th=[11994], 95.00th=[12387], 00:28:31.489 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18482], 99.95th=[18744], 00:28:31.489 | 99.99th=[18744] 00:28:31.489 bw ( KiB/s): min=29184, max=35328, per=33.33%, avg=33408.00, stdev=1819.22, samples=10 00:28:31.489 iops : min= 228, max= 276, avg=261.00, stdev=14.21, samples=10 00:28:31.489 lat (msec) : 10=0.23%, 20=99.77% 00:28:31.489 cpu : usr=88.83%, sys=10.19%, ctx=15, majf=0, minf=9 00:28:31.489 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.489 issued rwts: total=1308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.489 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.489 filename0: (groupid=0, jobs=1): err= 0: pid=82457: Wed May 15 09:21:42 2024 00:28:31.489 read: IOPS=261, BW=32.7MiB/s (34.2MB/s)(164MiB/5007msec) 00:28:31.489 slat (nsec): min=6976, max=85928, avg=21332.82, stdev=9208.18 00:28:31.489 clat (usec): min=7893, max=18644, avg=11429.31, stdev=1224.73 00:28:31.489 lat (usec): min=7907, max=18685, avg=11450.64, stdev=1225.59 00:28:31.489 clat percentiles (usec): 00:28:31.489 | 1.00th=[10683], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:28:31.489 | 30.00th=[11076], 40.00th=[11076], 
50.00th=[11207], 60.00th=[11207], 00:28:31.489 | 70.00th=[11338], 80.00th=[11338], 90.00th=[11863], 95.00th=[12518], 00:28:31.490 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:28:31.490 | 99.99th=[18744] 00:28:31.490 bw ( KiB/s): min=29184, max=35328, per=33.33%, avg=33408.00, stdev=1668.92, samples=10 00:28:31.490 iops : min= 228, max= 276, avg=261.00, stdev=13.04, samples=10 00:28:31.490 lat (msec) : 10=0.23%, 20=99.77% 00:28:31.490 cpu : usr=88.41%, sys=10.55%, ctx=10, majf=0, minf=9 00:28:31.490 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.490 issued rwts: total=1308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.490 00:28:31.490 Run status group 0 (all jobs): 00:28:31.490 READ: bw=97.9MiB/s (103MB/s), 32.6MiB/s-32.7MiB/s (34.2MB/s-34.3MB/s), io=490MiB (514MB), run=5001-5007msec 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:31.490 
09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 bdev_null0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 [2024-05-15 09:21:43.131059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 bdev_null1 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 bdev_null2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.490 { 00:28:31.490 "params": { 00:28:31.490 "name": "Nvme$subsystem", 00:28:31.490 "trtype": "$TEST_TRANSPORT", 00:28:31.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.490 "adrfam": "ipv4", 00:28:31.490 "trsvcid": "$NVMF_PORT", 00:28:31.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.490 "hdgst": ${hdgst:-false}, 00:28:31.490 "ddgst": ${ddgst:-false} 00:28:31.490 }, 00:28:31.490 "method": "bdev_nvme_attach_controller" 00:28:31.490 } 00:28:31.490 EOF 00:28:31.490 )") 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:31.490 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.491 { 00:28:31.491 "params": { 00:28:31.491 "name": "Nvme$subsystem", 00:28:31.491 "trtype": "$TEST_TRANSPORT", 00:28:31.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.491 "adrfam": "ipv4", 00:28:31.491 "trsvcid": "$NVMF_PORT", 00:28:31.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.491 "hdgst": ${hdgst:-false}, 00:28:31.491 "ddgst": ${ddgst:-false} 00:28:31.491 }, 00:28:31.491 "method": "bdev_nvme_attach_controller" 00:28:31.491 } 00:28:31.491 EOF 00:28:31.491 )") 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.491 { 00:28:31.491 "params": { 00:28:31.491 "name": "Nvme$subsystem", 00:28:31.491 "trtype": "$TEST_TRANSPORT", 00:28:31.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.491 "adrfam": "ipv4", 00:28:31.491 "trsvcid": "$NVMF_PORT", 00:28:31.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.491 "hdgst": ${hdgst:-false}, 00:28:31.491 "ddgst": ${ddgst:-false} 00:28:31.491 }, 00:28:31.491 "method": "bdev_nvme_attach_controller" 00:28:31.491 } 00:28:31.491 EOF 00:28:31.491 )") 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:31.491 "params": { 00:28:31.491 "name": "Nvme0", 00:28:31.491 "trtype": "tcp", 00:28:31.491 "traddr": "10.0.0.2", 00:28:31.491 "adrfam": "ipv4", 00:28:31.491 "trsvcid": "4420", 00:28:31.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.491 "hdgst": false, 00:28:31.491 "ddgst": false 00:28:31.491 }, 00:28:31.491 "method": "bdev_nvme_attach_controller" 00:28:31.491 },{ 00:28:31.491 "params": { 00:28:31.491 "name": "Nvme1", 00:28:31.491 "trtype": "tcp", 00:28:31.491 "traddr": "10.0.0.2", 00:28:31.491 "adrfam": "ipv4", 00:28:31.491 "trsvcid": "4420", 00:28:31.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:31.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:31.491 "hdgst": false, 00:28:31.491 "ddgst": false 00:28:31.491 }, 00:28:31.491 "method": "bdev_nvme_attach_controller" 00:28:31.491 },{ 00:28:31.491 "params": { 00:28:31.491 "name": "Nvme2", 00:28:31.491 "trtype": "tcp", 00:28:31.491 "traddr": "10.0.0.2", 00:28:31.491 "adrfam": "ipv4", 00:28:31.491 "trsvcid": "4420", 00:28:31.491 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:31.491 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:31.491 "hdgst": false, 00:28:31.491 "ddgst": false 00:28:31.491 }, 00:28:31.491 "method": "bdev_nvme_attach_controller" 00:28:31.491 }' 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:31.491 09:21:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.491 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:31.491 ... 00:28:31.491 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:31.491 ... 00:28:31.491 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:31.491 ... 00:28:31.491 fio-3.35 00:28:31.491 Starting 24 threads 00:28:43.732 00:28:43.732 filename0: (groupid=0, jobs=1): err= 0: pid=82558: Wed May 15 09:21:54 2024 00:28:43.732 read: IOPS=118, BW=475KiB/s (486kB/s)(4760KiB/10021msec) 00:28:43.732 slat (usec): min=3, max=10166, avg=31.22, stdev=294.29 00:28:43.732 clat (msec): min=29, max=384, avg=134.56, stdev=77.00 00:28:43.732 lat (msec): min=29, max=384, avg=134.59, stdev=77.01 00:28:43.732 clat percentiles (msec): 00:28:43.732 | 1.00th=[ 33], 5.00th=[ 55], 10.00th=[ 59], 20.00th=[ 63], 00:28:43.732 | 30.00th=[ 74], 40.00th=[ 90], 50.00th=[ 107], 60.00th=[ 144], 00:28:43.732 | 70.00th=[ 176], 80.00th=[ 213], 90.00th=[ 243], 95.00th=[ 268], 00:28:43.732 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 384], 99.95th=[ 384], 00:28:43.732 | 99.99th=[ 384] 00:28:43.732 bw ( KiB/s): min= 128, max= 944, per=4.32%, avg=472.40, stdev=267.60, samples=20 00:28:43.732 iops : min= 32, max= 236, avg=118.10, stdev=66.90, samples=20 00:28:43.732 lat (msec) : 50=3.61%, 100=44.71%, 250=44.37%, 500=7.31% 00:28:43.732 cpu : usr=35.93%, sys=2.83%, ctx=596, majf=0, minf=9 00:28:43.732 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.0%, 16=16.7%, 32=0.0%, >=64=0.0% 00:28:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.732 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.732 issued rwts: total=1190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.732 filename0: (groupid=0, jobs=1): err= 0: pid=82559: Wed May 15 09:21:54 2024 00:28:43.732 read: IOPS=122, BW=490KiB/s (502kB/s)(4908KiB/10018msec) 00:28:43.732 slat (usec): min=6, max=105, avg=20.69, stdev=10.57 00:28:43.732 clat (msec): min=18, max=572, avg=130.52, stdev=83.55 00:28:43.732 lat (msec): min=18, max=572, avg=130.54, stdev=83.55 00:28:43.732 clat percentiles (msec): 00:28:43.732 | 1.00th=[ 22], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 58], 00:28:43.732 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 103], 60.00th=[ 132], 00:28:43.732 | 70.00th=[ 169], 80.00th=[ 207], 90.00th=[ 239], 95.00th=[ 262], 00:28:43.732 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 575], 99.95th=[ 575], 00:28:43.732 | 99.99th=[ 575] 00:28:43.732 bw ( KiB/s): min= 112, max= 992, per=4.20%, avg=458.11, stdev=272.59, samples=19 00:28:43.732 iops : min= 28, max= 248, avg=114.53, stdev=68.15, samples=19 00:28:43.732 lat (msec) : 20=0.16%, 50=6.85%, 100=42.46%, 250=42.54%, 500=7.82% 00:28:43.732 lat (msec) : 750=0.16% 00:28:43.732 cpu : usr=40.45%, sys=3.35%, ctx=525, majf=0, minf=9 00:28:43.732 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=82.6%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.732 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:43.732 issued rwts: total=1227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.732 filename0: (groupid=0, jobs=1): err= 0: pid=82560: Wed May 15 09:21:54 2024 00:28:43.732 read: IOPS=109, BW=438KiB/s (449kB/s)(4412KiB/10063msec) 00:28:43.732 slat (usec): min=4, max=9044, avg=31.89, stdev=333.38 00:28:43.732 clat (msec): min=25, max=405, avg=145.47, stdev=92.95 00:28:43.732 lat (msec): min=25, max=405, avg=145.50, stdev=92.95 00:28:43.732 clat percentiles (msec): 00:28:43.732 | 1.00th=[ 26], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 67], 00:28:43.732 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 103], 60.00th=[ 171], 00:28:43.732 | 70.00th=[ 213], 80.00th=[ 234], 90.00th=[ 271], 95.00th=[ 326], 00:28:43.732 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 405], 99.95th=[ 405], 00:28:43.732 | 99.99th=[ 405] 00:28:43.732 bw ( KiB/s): min= 144, max= 872, per=3.98%, avg=434.80, stdev=272.20, samples=20 00:28:43.732 iops : min= 36, max= 218, avg=108.70, stdev=68.05, samples=20 00:28:43.732 lat (msec) : 50=4.99%, 100=44.79%, 250=36.08%, 500=14.14% 00:28:43.732 cpu : usr=31.85%, sys=2.66%, ctx=591, majf=0, minf=9 00:28:43.732 IO depths : 1=0.2%, 2=3.0%, 4=11.8%, 8=69.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:28:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.732 complete : 0=0.0%, 4=91.1%, 8=6.3%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.732 issued rwts: total=1103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.732 filename0: (groupid=0, jobs=1): err= 0: pid=82561: Wed May 15 09:21:54 2024 00:28:43.732 read: IOPS=118, BW=472KiB/s (484kB/s)(4752KiB/10058msec) 00:28:43.732 slat (usec): min=5, max=21063, avg=49.18, stdev=664.37 00:28:43.732 clat (msec): min=34, max=354, avg=135.03, stdev=77.59 00:28:43.732 lat (msec): min=34, max=354, avg=135.07, stdev=77.62 00:28:43.732 clat percentiles (msec): 00:28:43.732 | 1.00th=[ 36], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 62], 00:28:43.732 | 30.00th=[ 71], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 150], 00:28:43.732 | 70.00th=[ 182], 80.00th=[ 222], 90.00th=[ 245], 95.00th=[ 271], 00:28:43.732 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 355], 00:28:43.732 | 99.99th=[ 355] 00:28:43.732 bw ( KiB/s): min= 256, max= 912, per=4.29%, avg=468.85, stdev=254.05, samples=20 00:28:43.732 iops : min= 64, max= 228, avg=117.20, stdev=63.51, samples=20 00:28:43.732 lat (msec) : 50=3.20%, 100=48.06%, 250=39.81%, 500=8.92% 00:28:43.732 cpu : usr=33.91%, sys=2.50%, ctx=585, majf=0, minf=9 00:28:43.732 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=77.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 complete : 0=0.0%, 4=88.9%, 8=9.9%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.733 filename0: (groupid=0, jobs=1): err= 0: pid=82562: Wed May 15 09:21:54 2024 00:28:43.733 read: IOPS=119, BW=479KiB/s (490kB/s)(4796KiB/10015msec) 00:28:43.733 slat (usec): min=7, max=16032, avg=35.27, stdev=462.49 00:28:43.733 clat (msec): min=23, max=457, avg=133.48, stdev=79.49 00:28:43.733 lat (msec): min=23, max=457, avg=133.51, stdev=79.48 00:28:43.733 clat percentiles (msec): 00:28:43.733 | 1.00th=[ 30], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 64], 00:28:43.733 | 30.00th=[ 75], 40.00th=[ 
88], 50.00th=[ 104], 60.00th=[ 134], 00:28:43.733 | 70.00th=[ 171], 80.00th=[ 213], 90.00th=[ 239], 95.00th=[ 275], 00:28:43.733 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 456], 99.95th=[ 456], 00:28:43.733 | 99.99th=[ 456] 00:28:43.733 bw ( KiB/s): min= 128, max= 992, per=4.17%, avg=456.00, stdev=264.59, samples=19 00:28:43.733 iops : min= 32, max= 248, avg=114.00, stdev=66.15, samples=19 00:28:43.733 lat (msec) : 50=6.84%, 100=42.70%, 250=41.53%, 500=8.92% 00:28:43.733 cpu : usr=40.05%, sys=2.82%, ctx=587, majf=0, minf=9 00:28:43.733 IO depths : 1=0.1%, 2=0.4%, 4=1.9%, 8=81.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 issued rwts: total=1199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.733 filename0: (groupid=0, jobs=1): err= 0: pid=82563: Wed May 15 09:21:54 2024 00:28:43.733 read: IOPS=118, BW=473KiB/s (484kB/s)(4736KiB/10017msec) 00:28:43.733 slat (usec): min=7, max=186, avg=21.10, stdev=12.50 00:28:43.733 clat (msec): min=20, max=346, avg=135.24, stdev=79.74 00:28:43.733 lat (msec): min=20, max=346, avg=135.26, stdev=79.74 00:28:43.733 clat percentiles (msec): 00:28:43.733 | 1.00th=[ 21], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 62], 00:28:43.733 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 102], 60.00th=[ 155], 00:28:43.733 | 70.00th=[ 192], 80.00th=[ 220], 90.00th=[ 249], 95.00th=[ 268], 00:28:43.733 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 347], 00:28:43.733 | 99.99th=[ 347] 00:28:43.733 bw ( KiB/s): min= 208, max= 1000, per=4.10%, avg=447.58, stdev=277.16, samples=19 00:28:43.733 iops : min= 52, max= 250, avg=111.89, stdev=69.29, samples=19 00:28:43.733 lat (msec) : 50=4.81%, 100=44.85%, 250=41.89%, 500=8.45% 00:28:43.733 cpu : usr=39.94%, sys=2.85%, ctx=477, majf=0, minf=9 00:28:43.733 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:28:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 complete : 0=0.0%, 4=89.1%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 issued rwts: total=1184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.733 filename0: (groupid=0, jobs=1): err= 0: pid=82564: Wed May 15 09:21:54 2024 00:28:43.733 read: IOPS=112, BW=451KiB/s (462kB/s)(4516KiB/10008msec) 00:28:43.733 slat (usec): min=3, max=18053, avg=33.73, stdev=536.87 00:28:43.733 clat (msec): min=14, max=501, avg=141.54, stdev=90.02 00:28:43.733 lat (msec): min=14, max=501, avg=141.58, stdev=90.01 00:28:43.733 clat percentiles (msec): 00:28:43.733 | 1.00th=[ 29], 5.00th=[ 42], 10.00th=[ 54], 20.00th=[ 61], 00:28:43.733 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 100], 60.00th=[ 171], 00:28:43.733 | 70.00th=[ 211], 80.00th=[ 222], 90.00th=[ 255], 95.00th=[ 296], 00:28:43.733 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 502], 99.95th=[ 502], 00:28:43.733 | 99.99th=[ 502] 00:28:43.733 bw ( KiB/s): min= 128, max= 1040, per=3.88%, avg=423.58, stdev=284.80, samples=19 00:28:43.733 iops : min= 32, max= 260, avg=105.89, stdev=71.20, samples=19 00:28:43.733 lat (msec) : 20=0.53%, 50=8.15%, 100=42.07%, 250=37.29%, 500=11.78% 00:28:43.733 lat (msec) : 750=0.18% 00:28:43.733 cpu : usr=35.97%, sys=2.88%, ctx=710, majf=0, minf=9 00:28:43.733 IO depths : 1=0.1%, 2=2.6%, 4=10.8%, 8=71.7%, 16=14.8%, 32=0.0%, >=64=0.0% 
00:28:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 complete : 0=0.0%, 4=90.4%, 8=7.2%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 issued rwts: total=1129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.733 filename0: (groupid=0, jobs=1): err= 0: pid=82565: Wed May 15 09:21:54 2024 00:28:43.733 read: IOPS=107, BW=428KiB/s (439kB/s)(4288KiB/10012msec) 00:28:43.733 slat (usec): min=5, max=18048, avg=30.60, stdev=550.86 00:28:43.733 clat (msec): min=19, max=438, avg=149.32, stdev=88.61 00:28:43.733 lat (msec): min=19, max=438, avg=149.35, stdev=88.59 00:28:43.733 clat percentiles (msec): 00:28:43.733 | 1.00th=[ 20], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 75], 00:28:43.733 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 108], 60.00th=[ 194], 00:28:43.733 | 70.00th=[ 215], 80.00th=[ 226], 90.00th=[ 271], 95.00th=[ 279], 00:28:43.733 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 439], 99.95th=[ 439], 00:28:43.733 | 99.99th=[ 439] 00:28:43.733 bw ( KiB/s): min= 128, max= 888, per=3.62%, avg=395.37, stdev=252.73, samples=19 00:28:43.733 iops : min= 32, max= 222, avg=98.84, stdev=63.18, samples=19 00:28:43.733 lat (msec) : 20=1.49%, 50=1.59%, 100=41.98%, 250=38.90%, 500=16.04% 00:28:43.733 cpu : usr=30.48%, sys=2.73%, ctx=439, majf=0, minf=9 00:28:43.733 IO depths : 1=0.2%, 2=3.6%, 4=14.4%, 8=67.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:28:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 complete : 0=0.0%, 4=91.6%, 8=5.1%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.733 filename1: (groupid=0, jobs=1): err= 0: pid=82566: Wed May 15 09:21:54 2024 00:28:43.733 read: IOPS=113, BW=455KiB/s (466kB/s)(4564KiB/10034msec) 00:28:43.733 slat (usec): min=3, max=13047, avg=31.45, stdev=385.84 00:28:43.733 clat (msec): min=29, max=345, avg=140.45, stdev=86.79 00:28:43.733 lat (msec): min=29, max=345, avg=140.48, stdev=86.81 00:28:43.733 clat percentiles (msec): 00:28:43.733 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 55], 20.00th=[ 63], 00:28:43.733 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 99], 60.00th=[ 180], 00:28:43.733 | 70.00th=[ 209], 80.00th=[ 228], 90.00th=[ 251], 95.00th=[ 317], 00:28:43.733 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 347], 00:28:43.733 | 99.99th=[ 347] 00:28:43.733 bw ( KiB/s): min= 144, max= 952, per=4.13%, avg=451.20, stdev=273.57, samples=20 00:28:43.733 iops : min= 36, max= 238, avg=112.80, stdev=68.39, samples=20 00:28:43.733 lat (msec) : 50=6.84%, 100=44.61%, 250=37.34%, 500=11.22% 00:28:43.733 cpu : usr=43.38%, sys=3.42%, ctx=666, majf=0, minf=9 00:28:43.733 IO depths : 1=0.1%, 2=3.4%, 4=13.8%, 8=68.3%, 16=14.4%, 32=0.0%, >=64=0.0% 00:28:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 complete : 0=0.0%, 4=91.3%, 8=5.7%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 issued rwts: total=1141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.733 filename1: (groupid=0, jobs=1): err= 0: pid=82567: Wed May 15 09:21:54 2024 00:28:43.733 read: IOPS=110, BW=440KiB/s (451kB/s)(4404KiB/10007msec) 00:28:43.733 slat (usec): min=7, max=18033, avg=30.87, stdev=543.10 00:28:43.733 clat (msec): min=7, max=434, avg=145.14, stdev=91.07 00:28:43.733 lat (msec): min=7, max=434, 
avg=145.17, stdev=91.10 00:28:43.733 clat percentiles (msec): 00:28:43.733 | 1.00th=[ 21], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 61], 00:28:43.733 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 103], 60.00th=[ 190], 00:28:43.733 | 70.00th=[ 215], 80.00th=[ 241], 90.00th=[ 268], 95.00th=[ 305], 00:28:43.733 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 435], 99.95th=[ 435], 00:28:43.733 | 99.99th=[ 435] 00:28:43.733 bw ( KiB/s): min= 128, max= 872, per=3.71%, avg=405.05, stdev=263.68, samples=19 00:28:43.733 iops : min= 32, max= 218, avg=101.26, stdev=65.92, samples=19 00:28:43.733 lat (msec) : 10=0.54%, 20=0.18%, 50=3.18%, 100=45.41%, 250=35.06% 00:28:43.733 lat (msec) : 500=15.62% 00:28:43.733 cpu : usr=31.06%, sys=2.55%, ctx=387, majf=0, minf=9 00:28:43.733 IO depths : 1=0.1%, 2=3.2%, 4=12.6%, 8=69.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:28:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 complete : 0=0.0%, 4=91.1%, 8=6.2%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 issued rwts: total=1101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.733 filename1: (groupid=0, jobs=1): err= 0: pid=82568: Wed May 15 09:21:54 2024 00:28:43.733 read: IOPS=116, BW=465KiB/s (476kB/s)(4656KiB/10012msec) 00:28:43.733 slat (nsec): min=6506, max=92440, avg=21617.18, stdev=12572.85 00:28:43.733 clat (msec): min=30, max=340, avg=137.50, stdev=77.16 00:28:43.733 lat (msec): min=30, max=340, avg=137.52, stdev=77.16 00:28:43.733 clat percentiles (msec): 00:28:43.733 | 1.00th=[ 37], 5.00th=[ 56], 10.00th=[ 60], 20.00th=[ 62], 00:28:43.733 | 30.00th=[ 71], 40.00th=[ 89], 50.00th=[ 105], 60.00th=[ 157], 00:28:43.733 | 70.00th=[ 192], 80.00th=[ 222], 90.00th=[ 247], 95.00th=[ 257], 00:28:43.733 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:28:43.733 | 99.99th=[ 342] 00:28:43.733 bw ( KiB/s): min= 200, max= 904, per=4.22%, avg=461.60, stdev=266.38, samples=20 00:28:43.733 iops : min= 50, max= 226, avg=115.40, stdev=66.60, samples=20 00:28:43.733 lat (msec) : 50=3.01%, 100=46.82%, 250=41.75%, 500=8.42% 00:28:43.733 cpu : usr=36.13%, sys=3.07%, ctx=569, majf=0, minf=9 00:28:43.733 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.733 issued rwts: total=1164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.733 filename1: (groupid=0, jobs=1): err= 0: pid=82569: Wed May 15 09:21:54 2024 00:28:43.733 read: IOPS=111, BW=445KiB/s (455kB/s)(4460KiB/10031msec) 00:28:43.733 slat (usec): min=4, max=13994, avg=47.63, stdev=571.76 00:28:43.733 clat (msec): min=29, max=344, avg=143.44, stdev=86.34 00:28:43.733 lat (msec): min=29, max=344, avg=143.49, stdev=86.36 00:28:43.733 clat percentiles (msec): 00:28:43.733 | 1.00th=[ 30], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 64], 00:28:43.733 | 30.00th=[ 77], 40.00th=[ 83], 50.00th=[ 103], 60.00th=[ 180], 00:28:43.733 | 70.00th=[ 211], 80.00th=[ 232], 90.00th=[ 266], 95.00th=[ 296], 00:28:43.734 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 334], 99.95th=[ 347], 00:28:43.734 | 99.99th=[ 347] 00:28:43.734 bw ( KiB/s): min= 144, max= 888, per=4.07%, avg=444.15, stdev=262.88, samples=20 00:28:43.734 iops : min= 36, max= 222, avg=111.00, stdev=65.66, samples=20 00:28:43.734 lat (msec) : 50=4.48%, 
100=45.11%, 250=34.98%, 500=15.43% 00:28:43.734 cpu : usr=40.62%, sys=2.97%, ctx=435, majf=0, minf=9 00:28:43.734 IO depths : 1=0.1%, 2=3.4%, 4=13.5%, 8=68.3%, 16=14.7%, 32=0.0%, >=64=0.0% 00:28:43.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 complete : 0=0.0%, 4=91.5%, 8=5.5%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 issued rwts: total=1115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.734 filename1: (groupid=0, jobs=1): err= 0: pid=82570: Wed May 15 09:21:54 2024 00:28:43.734 read: IOPS=118, BW=474KiB/s (485kB/s)(4768KiB/10063msec) 00:28:43.734 slat (nsec): min=4473, max=73625, avg=13861.24, stdev=7862.96 00:28:43.734 clat (msec): min=24, max=336, avg=134.70, stdev=81.12 00:28:43.734 lat (msec): min=24, max=336, avg=134.72, stdev=81.12 00:28:43.734 clat percentiles (msec): 00:28:43.734 | 1.00th=[ 26], 5.00th=[ 49], 10.00th=[ 53], 20.00th=[ 62], 00:28:43.734 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 100], 60.00th=[ 148], 00:28:43.734 | 70.00th=[ 190], 80.00th=[ 230], 90.00th=[ 251], 95.00th=[ 271], 00:28:43.734 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:28:43.734 | 99.99th=[ 338] 00:28:43.734 bw ( KiB/s): min= 256, max= 920, per=4.31%, avg=470.40, stdev=262.85, samples=20 00:28:43.734 iops : min= 64, max= 230, avg=117.60, stdev=65.71, samples=20 00:28:43.734 lat (msec) : 50=5.96%, 100=45.30%, 250=38.84%, 500=9.90% 00:28:43.734 cpu : usr=36.48%, sys=2.76%, ctx=601, majf=0, minf=9 00:28:43.734 IO depths : 1=0.2%, 2=1.6%, 4=6.0%, 8=76.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:43.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 issued rwts: total=1192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.734 filename1: (groupid=0, jobs=1): err= 0: pid=82571: Wed May 15 09:21:54 2024 00:28:43.734 read: IOPS=111, BW=446KiB/s (456kB/s)(4464KiB/10019msec) 00:28:43.734 slat (nsec): min=5541, max=64255, avg=14325.06, stdev=7439.68 00:28:43.734 clat (msec): min=17, max=363, avg=143.52, stdev=89.55 00:28:43.734 lat (msec): min=17, max=363, avg=143.54, stdev=89.55 00:28:43.734 clat percentiles (msec): 00:28:43.734 | 1.00th=[ 26], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 58], 00:28:43.734 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 102], 60.00th=[ 186], 00:28:43.734 | 70.00th=[ 215], 80.00th=[ 239], 90.00th=[ 268], 95.00th=[ 284], 00:28:43.734 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 363], 99.95th=[ 363], 00:28:43.734 | 99.99th=[ 363] 00:28:43.734 bw ( KiB/s): min= 128, max= 928, per=3.76%, avg=410.95, stdev=273.29, samples=19 00:28:43.734 iops : min= 32, max= 232, avg=102.74, stdev=68.32, samples=19 00:28:43.734 lat (msec) : 20=0.36%, 50=5.11%, 100=44.09%, 250=36.47%, 500=13.98% 00:28:43.734 cpu : usr=30.46%, sys=2.66%, ctx=440, majf=0, minf=9 00:28:43.734 IO depths : 1=0.1%, 2=2.7%, 4=10.9%, 8=71.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:28:43.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 complete : 0=0.0%, 4=90.8%, 8=6.8%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 issued rwts: total=1116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.734 filename1: (groupid=0, jobs=1): err= 0: pid=82572: Wed May 15 09:21:54 2024 00:28:43.734 read: IOPS=108, BW=433KiB/s 
(443kB/s)(4336KiB/10015msec) 00:28:43.734 slat (usec): min=7, max=11040, avg=28.29, stdev=334.92 00:28:43.734 clat (msec): min=14, max=441, avg=147.63, stdev=89.11 00:28:43.734 lat (msec): min=14, max=441, avg=147.66, stdev=89.12 00:28:43.734 clat percentiles (msec): 00:28:43.734 | 1.00th=[ 21], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 64], 00:28:43.734 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 108], 60.00th=[ 197], 00:28:43.734 | 70.00th=[ 211], 80.00th=[ 239], 90.00th=[ 266], 95.00th=[ 296], 00:28:43.734 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 443], 99.95th=[ 443], 00:28:43.734 | 99.99th=[ 443] 00:28:43.734 bw ( KiB/s): min= 128, max= 888, per=3.64%, avg=397.47, stdev=253.87, samples=19 00:28:43.734 iops : min= 32, max= 222, avg=99.37, stdev=63.47, samples=19 00:28:43.734 lat (msec) : 20=0.18%, 50=4.89%, 100=41.14%, 250=40.87%, 500=12.92% 00:28:43.734 cpu : usr=31.38%, sys=2.39%, ctx=511, majf=0, minf=9 00:28:43.734 IO depths : 1=0.2%, 2=3.7%, 4=14.3%, 8=67.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:28:43.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 complete : 0=0.0%, 4=91.5%, 8=5.3%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 issued rwts: total=1084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.734 filename1: (groupid=0, jobs=1): err= 0: pid=82573: Wed May 15 09:21:54 2024 00:28:43.734 read: IOPS=119, BW=477KiB/s (489kB/s)(4784KiB/10019msec) 00:28:43.734 slat (nsec): min=7625, max=57928, avg=19170.73, stdev=9964.98 00:28:43.734 clat (msec): min=27, max=355, avg=133.78, stdev=75.42 00:28:43.734 lat (msec): min=27, max=355, avg=133.80, stdev=75.42 00:28:43.734 clat percentiles (msec): 00:28:43.734 | 1.00th=[ 45], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 64], 00:28:43.734 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 109], 60.00th=[ 146], 00:28:43.734 | 70.00th=[ 182], 80.00th=[ 218], 90.00th=[ 243], 95.00th=[ 264], 00:28:43.734 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 355], 00:28:43.734 | 99.99th=[ 355] 00:28:43.734 bw ( KiB/s): min= 208, max= 960, per=4.18%, avg=456.42, stdev=261.00, samples=19 00:28:43.734 iops : min= 52, max= 240, avg=114.11, stdev=65.25, samples=19 00:28:43.734 lat (msec) : 50=5.10%, 100=42.81%, 250=43.56%, 500=8.53% 00:28:43.734 cpu : usr=40.61%, sys=3.37%, ctx=459, majf=0, minf=9 00:28:43.734 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:43.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 issued rwts: total=1196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.734 filename2: (groupid=0, jobs=1): err= 0: pid=82574: Wed May 15 09:21:54 2024 00:28:43.734 read: IOPS=120, BW=482KiB/s (493kB/s)(4820KiB/10007msec) 00:28:43.734 slat (nsec): min=3555, max=60279, avg=19592.66, stdev=10621.02 00:28:43.734 clat (msec): min=29, max=349, avg=132.70, stdev=74.45 00:28:43.734 lat (msec): min=29, max=349, avg=132.72, stdev=74.45 00:28:43.734 clat percentiles (msec): 00:28:43.734 | 1.00th=[ 36], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 66], 00:28:43.734 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 108], 60.00th=[ 140], 00:28:43.734 | 70.00th=[ 178], 80.00th=[ 213], 90.00th=[ 239], 95.00th=[ 259], 00:28:43.734 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 351], 00:28:43.734 | 99.99th=[ 351] 00:28:43.734 bw ( KiB/s): min= 
256, max= 920, per=4.23%, avg=462.74, stdev=241.94, samples=19 00:28:43.734 iops : min= 64, max= 230, avg=115.68, stdev=60.48, samples=19 00:28:43.734 lat (msec) : 50=4.81%, 100=41.58%, 250=44.90%, 500=8.71% 00:28:43.734 cpu : usr=42.82%, sys=3.70%, ctx=459, majf=0, minf=9 00:28:43.734 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:43.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 issued rwts: total=1205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.734 filename2: (groupid=0, jobs=1): err= 0: pid=82575: Wed May 15 09:21:54 2024 00:28:43.734 read: IOPS=101, BW=406KiB/s (416kB/s)(4076KiB/10028msec) 00:28:43.734 slat (usec): min=7, max=18035, avg=51.32, stdev=797.91 00:28:43.734 clat (msec): min=45, max=347, avg=157.01, stdev=80.55 00:28:43.734 lat (msec): min=45, max=347, avg=157.06, stdev=80.56 00:28:43.734 clat percentiles (msec): 00:28:43.734 | 1.00th=[ 48], 5.00th=[ 55], 10.00th=[ 78], 20.00th=[ 81], 00:28:43.734 | 30.00th=[ 84], 40.00th=[ 108], 50.00th=[ 130], 60.00th=[ 199], 00:28:43.734 | 70.00th=[ 218], 80.00th=[ 239], 90.00th=[ 271], 95.00th=[ 300], 00:28:43.734 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 347], 00:28:43.734 | 99.99th=[ 347] 00:28:43.734 bw ( KiB/s): min= 144, max= 824, per=3.67%, avg=401.20, stdev=208.91, samples=20 00:28:43.734 iops : min= 36, max= 206, avg=100.30, stdev=52.23, samples=20 00:28:43.734 lat (msec) : 50=1.37%, 100=33.56%, 250=54.27%, 500=10.79% 00:28:43.734 cpu : usr=31.58%, sys=2.45%, ctx=384, majf=0, minf=9 00:28:43.734 IO depths : 1=0.2%, 2=5.8%, 4=23.3%, 8=58.0%, 16=12.8%, 32=0.0%, >=64=0.0% 00:28:43.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 complete : 0=0.0%, 4=94.0%, 8=0.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.734 filename2: (groupid=0, jobs=1): err= 0: pid=82576: Wed May 15 09:21:54 2024 00:28:43.734 read: IOPS=111, BW=444KiB/s (455kB/s)(4452KiB/10026msec) 00:28:43.734 slat (usec): min=6, max=18044, avg=29.66, stdev=540.50 00:28:43.734 clat (msec): min=28, max=341, avg=143.97, stdev=90.03 00:28:43.734 lat (msec): min=28, max=341, avg=144.00, stdev=90.01 00:28:43.734 clat percentiles (msec): 00:28:43.734 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 59], 00:28:43.734 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 184], 00:28:43.734 | 70.00th=[ 220], 80.00th=[ 241], 90.00th=[ 264], 95.00th=[ 296], 00:28:43.734 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 342], 99.95th=[ 342], 00:28:43.734 | 99.99th=[ 342] 00:28:43.734 bw ( KiB/s): min= 128, max= 968, per=4.01%, avg=438.80, stdev=291.14, samples=20 00:28:43.734 iops : min= 32, max= 242, avg=109.70, stdev=72.79, samples=20 00:28:43.734 lat (msec) : 50=4.22%, 100=47.53%, 250=33.69%, 500=14.56% 00:28:43.734 cpu : usr=30.70%, sys=2.38%, ctx=441, majf=0, minf=9 00:28:43.734 IO depths : 1=0.1%, 2=3.0%, 4=11.8%, 8=70.4%, 16=14.7%, 32=0.0%, >=64=0.0% 00:28:43.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 complete : 0=0.0%, 4=90.7%, 8=6.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.734 issued rwts: total=1113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.734 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:28:43.734 filename2: (groupid=0, jobs=1): err= 0: pid=82577: Wed May 15 09:21:54 2024 00:28:43.734 read: IOPS=108, BW=434KiB/s (444kB/s)(4348KiB/10022msec) 00:28:43.734 slat (usec): min=3, max=11900, avg=27.22, stdev=360.62 00:28:43.735 clat (msec): min=28, max=366, avg=147.25, stdev=84.28 00:28:43.735 lat (msec): min=28, max=366, avg=147.28, stdev=84.28 00:28:43.735 clat percentiles (msec): 00:28:43.735 | 1.00th=[ 32], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 71], 00:28:43.735 | 30.00th=[ 79], 40.00th=[ 94], 50.00th=[ 104], 60.00th=[ 184], 00:28:43.735 | 70.00th=[ 211], 80.00th=[ 230], 90.00th=[ 264], 95.00th=[ 288], 00:28:43.735 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 368], 99.95th=[ 368], 00:28:43.735 | 99.99th=[ 368] 00:28:43.735 bw ( KiB/s): min= 144, max= 920, per=3.95%, avg=431.20, stdev=267.40, samples=20 00:28:43.735 iops : min= 36, max= 230, avg=107.80, stdev=66.85, samples=20 00:28:43.735 lat (msec) : 50=4.42%, 100=40.94%, 250=43.97%, 500=10.67% 00:28:43.735 cpu : usr=35.92%, sys=2.84%, ctx=634, majf=0, minf=9 00:28:43.735 IO depths : 1=0.1%, 2=3.4%, 4=14.1%, 8=67.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:28:43.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 complete : 0=0.0%, 4=91.7%, 8=5.1%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 issued rwts: total=1087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.735 filename2: (groupid=0, jobs=1): err= 0: pid=82578: Wed May 15 09:21:54 2024 00:28:43.735 read: IOPS=113, BW=454KiB/s (465kB/s)(4548KiB/10021msec) 00:28:43.735 slat (usec): min=4, max=18053, avg=42.20, stdev=611.57 00:28:43.735 clat (msec): min=31, max=354, avg=140.52, stdev=83.13 00:28:43.735 lat (msec): min=31, max=354, avg=140.56, stdev=83.15 00:28:43.735 clat percentiles (msec): 00:28:43.735 | 1.00th=[ 34], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 58], 00:28:43.735 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 105], 60.00th=[ 184], 00:28:43.735 | 70.00th=[ 209], 80.00th=[ 222], 90.00th=[ 245], 95.00th=[ 275], 00:28:43.735 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 355], 00:28:43.735 | 99.99th=[ 355] 00:28:43.735 bw ( KiB/s): min= 240, max= 912, per=4.12%, avg=450.80, stdev=280.35, samples=20 00:28:43.735 iops : min= 60, max= 228, avg=112.70, stdev=70.09, samples=20 00:28:43.735 lat (msec) : 50=2.37%, 100=46.35%, 250=41.69%, 500=9.59% 00:28:43.735 cpu : usr=31.81%, sys=2.69%, ctx=578, majf=0, minf=9 00:28:43.735 IO depths : 1=0.1%, 2=2.5%, 4=10.0%, 8=72.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:28:43.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 complete : 0=0.0%, 4=90.4%, 8=7.4%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 issued rwts: total=1137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.735 filename2: (groupid=0, jobs=1): err= 0: pid=82579: Wed May 15 09:21:54 2024 00:28:43.735 read: IOPS=119, BW=476KiB/s (488kB/s)(4784KiB/10041msec) 00:28:43.735 slat (usec): min=5, max=24031, avg=49.49, stdev=867.92 00:28:43.735 clat (msec): min=28, max=354, avg=134.09, stdev=70.88 00:28:43.735 lat (msec): min=28, max=354, avg=134.14, stdev=70.90 00:28:43.735 clat percentiles (msec): 00:28:43.735 | 1.00th=[ 36], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 75], 00:28:43.735 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 108], 60.00th=[ 140], 00:28:43.735 | 70.00th=[ 182], 80.00th=[ 213], 90.00th=[ 239], 95.00th=[ 249], 
00:28:43.735 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 355], 00:28:43.735 | 99.99th=[ 355] 00:28:43.735 bw ( KiB/s): min= 256, max= 904, per=4.32%, avg=472.00, stdev=235.82, samples=20 00:28:43.735 iops : min= 64, max= 226, avg=118.00, stdev=58.96, samples=20 00:28:43.735 lat (msec) : 50=2.26%, 100=41.89%, 250=51.17%, 500=4.68% 00:28:43.735 cpu : usr=31.08%, sys=2.29%, ctx=380, majf=0, minf=9 00:28:43.735 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:28:43.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 issued rwts: total=1196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.735 filename2: (groupid=0, jobs=1): err= 0: pid=82580: Wed May 15 09:21:54 2024 00:28:43.735 read: IOPS=109, BW=437KiB/s (448kB/s)(4372KiB/10003msec) 00:28:43.735 slat (usec): min=7, max=5062, avg=30.47, stdev=152.78 00:28:43.735 clat (msec): min=36, max=461, avg=146.24, stdev=87.40 00:28:43.735 lat (msec): min=36, max=461, avg=146.27, stdev=87.40 00:28:43.735 clat percentiles (msec): 00:28:43.735 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 66], 00:28:43.735 | 30.00th=[ 77], 40.00th=[ 86], 50.00th=[ 104], 60.00th=[ 192], 00:28:43.735 | 70.00th=[ 215], 80.00th=[ 236], 90.00th=[ 257], 95.00th=[ 266], 00:28:43.735 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 460], 99.95th=[ 464], 00:28:43.735 | 99.99th=[ 464] 00:28:43.735 bw ( KiB/s): min= 127, max= 896, per=3.73%, avg=407.53, stdev=256.26, samples=19 00:28:43.735 iops : min= 31, max= 224, avg=101.84, stdev=64.11, samples=19 00:28:43.735 lat (msec) : 50=6.50%, 100=41.63%, 250=40.16%, 500=11.71% 00:28:43.735 cpu : usr=40.37%, sys=2.91%, ctx=518, majf=0, minf=9 00:28:43.735 IO depths : 1=0.1%, 2=3.8%, 4=15.3%, 8=66.7%, 16=14.1%, 32=0.0%, >=64=0.0% 00:28:43.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 complete : 0=0.0%, 4=91.7%, 8=5.0%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 issued rwts: total=1093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.735 filename2: (groupid=0, jobs=1): err= 0: pid=82581: Wed May 15 09:21:54 2024 00:28:43.735 read: IOPS=120, BW=483KiB/s (494kB/s)(4840KiB/10026msec) 00:28:43.735 slat (usec): min=7, max=13063, avg=38.47, stdev=473.09 00:28:43.735 clat (msec): min=29, max=336, avg=132.29, stdev=72.04 00:28:43.735 lat (msec): min=29, max=336, avg=132.33, stdev=72.05 00:28:43.735 clat percentiles (msec): 00:28:43.735 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 63], 00:28:43.735 | 30.00th=[ 77], 40.00th=[ 89], 50.00th=[ 115], 60.00th=[ 144], 00:28:43.735 | 70.00th=[ 178], 80.00th=[ 207], 90.00th=[ 232], 95.00th=[ 255], 00:28:43.735 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 338], 99.95th=[ 338], 00:28:43.735 | 99.99th=[ 338] 00:28:43.735 bw ( KiB/s): min= 200, max= 928, per=4.40%, avg=480.40, stdev=260.83, samples=20 00:28:43.735 iops : min= 50, max= 232, avg=120.10, stdev=65.21, samples=20 00:28:43.735 lat (msec) : 50=6.12%, 100=37.27%, 250=50.50%, 500=6.12% 00:28:43.735 cpu : usr=40.36%, sys=3.20%, ctx=507, majf=0, minf=9 00:28:43.735 IO depths : 1=0.1%, 2=0.4%, 4=2.1%, 8=81.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:43.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.735 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:43.735 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:43.735 00:28:43.735 Run status group 0 (all jobs): 00:28:43.735 READ: bw=10.7MiB/s (11.2MB/s), 406KiB/s-490KiB/s (416kB/s-502kB/s), io=107MiB (112MB), run=10003-10063msec 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:43.735 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.736 bdev_null0 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.736 [2024-05-15 09:21:54.576223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:43.736 09:21:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.736 bdev_null1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:43.736 { 00:28:43.736 "params": { 00:28:43.736 "name": "Nvme$subsystem", 00:28:43.736 "trtype": "$TEST_TRANSPORT", 00:28:43.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.736 "adrfam": "ipv4", 00:28:43.736 "trsvcid": "$NVMF_PORT", 00:28:43.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.736 "hdgst": ${hdgst:-false}, 00:28:43.736 "ddgst": ${ddgst:-false} 00:28:43.736 }, 00:28:43.736 "method": "bdev_nvme_attach_controller" 00:28:43.736 } 00:28:43.736 EOF 00:28:43.736 )") 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:43.736 { 00:28:43.736 "params": { 00:28:43.736 "name": "Nvme$subsystem", 00:28:43.736 "trtype": "$TEST_TRANSPORT", 00:28:43.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.736 "adrfam": "ipv4", 00:28:43.736 "trsvcid": "$NVMF_PORT", 00:28:43.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.736 "hdgst": ${hdgst:-false}, 00:28:43.736 "ddgst": ${ddgst:-false} 00:28:43.736 }, 00:28:43.736 "method": "bdev_nvme_attach_controller" 00:28:43.736 } 00:28:43.736 EOF 00:28:43.736 )") 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:43.736 "params": { 00:28:43.736 "name": "Nvme0", 00:28:43.736 "trtype": "tcp", 00:28:43.736 "traddr": "10.0.0.2", 00:28:43.736 "adrfam": "ipv4", 00:28:43.736 "trsvcid": "4420", 00:28:43.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:43.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:43.736 "hdgst": false, 00:28:43.736 "ddgst": false 00:28:43.736 }, 00:28:43.736 "method": "bdev_nvme_attach_controller" 00:28:43.736 },{ 00:28:43.736 "params": { 00:28:43.736 "name": "Nvme1", 00:28:43.736 "trtype": "tcp", 00:28:43.736 "traddr": "10.0.0.2", 00:28:43.736 "adrfam": "ipv4", 00:28:43.736 "trsvcid": "4420", 00:28:43.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:43.736 "hdgst": false, 00:28:43.736 "ddgst": false 00:28:43.736 }, 00:28:43.736 "method": "bdev_nvme_attach_controller" 00:28:43.736 }' 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:43.736 09:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.737 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:43.737 ... 00:28:43.737 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:43.737 ... 
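The trace above shows how the harness drives the run: the merged bdev_nvme_attach_controller JSON is handed to the SPDK fio plugin on /dev/fd/62 and the generated job description on /dev/fd/61, with the plugin pulled in through LD_PRELOAD. Replayed by hand with ordinary files, the same invocation looks roughly like this (the two paths under /tmp are placeholders, not paths used by the test):

# Sketch of the invocation traced above, using real files instead of /dev/fd redirections.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # fio ioengine plugin built by SPDK
CONF=/tmp/bdev.json    # JSON config carrying the two bdev_nvme_attach_controller entries printed above
JOB=/tmp/dif.fio       # job file equivalent to the filename0/filename1 definitions printed above

LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf="$CONF" "$JOB"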
00:28:43.737 fio-3.35 00:28:43.737 Starting 4 threads 00:28:49.015 00:28:49.015 filename0: (groupid=0, jobs=1): err= 0: pid=82707: Wed May 15 09:22:00 2024 00:28:49.015 read: IOPS=1981, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5001msec) 00:28:49.015 slat (nsec): min=4665, max=55723, avg=15774.08, stdev=4448.62 00:28:49.015 clat (usec): min=668, max=15341, avg=3983.36, stdev=788.56 00:28:49.015 lat (usec): min=683, max=15358, avg=3999.13, stdev=788.12 00:28:49.015 clat percentiles (usec): 00:28:49.015 | 1.00th=[ 1549], 5.00th=[ 2474], 10.00th=[ 2999], 20.00th=[ 3589], 00:28:49.015 | 30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4178], 60.00th=[ 4293], 00:28:49.015 | 70.00th=[ 4359], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4883], 00:28:49.015 | 99.00th=[ 5735], 99.50th=[ 5997], 99.90th=[10028], 99.95th=[11863], 00:28:49.015 | 99.99th=[15401] 00:28:49.015 bw ( KiB/s): min=14464, max=17680, per=23.28%, avg=15868.44, stdev=1144.30, samples=9 00:28:49.015 iops : min= 1808, max= 2210, avg=1983.56, stdev=143.04, samples=9 00:28:49.015 lat (usec) : 750=0.01%, 1000=0.02% 00:28:49.015 lat (msec) : 2=3.12%, 4=38.65%, 10=58.10%, 20=0.10% 00:28:49.015 cpu : usr=89.20%, sys=9.54%, ctx=125, majf=0, minf=10 00:28:49.015 IO depths : 1=0.1%, 2=15.7%, 4=56.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.015 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.015 issued rwts: total=9907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.016 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:49.016 filename0: (groupid=0, jobs=1): err= 0: pid=82708: Wed May 15 09:22:00 2024 00:28:49.016 read: IOPS=2143, BW=16.7MiB/s (17.6MB/s)(83.8MiB/5003msec) 00:28:49.016 slat (nsec): min=4580, max=50385, avg=14867.08, stdev=4593.84 00:28:49.016 clat (usec): min=726, max=15301, avg=3686.69, stdev=951.48 00:28:49.016 lat (usec): min=744, max=15316, avg=3701.56, stdev=951.43 00:28:49.016 clat percentiles (usec): 00:28:49.016 | 1.00th=[ 1336], 5.00th=[ 1893], 10.00th=[ 2114], 20.00th=[ 2868], 00:28:49.016 | 30.00th=[ 3392], 40.00th=[ 3720], 50.00th=[ 3949], 60.00th=[ 4113], 00:28:49.016 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4817], 00:28:49.016 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 8717], 99.95th=[10028], 00:28:49.016 | 99.99th=[11994] 00:28:49.016 bw ( KiB/s): min=14592, max=19200, per=25.43%, avg=17331.56, stdev=1466.90, samples=9 00:28:49.016 iops : min= 1824, max= 2400, avg=2166.44, stdev=183.36, samples=9 00:28:49.016 lat (usec) : 750=0.05%, 1000=0.16% 00:28:49.016 lat (msec) : 2=6.36%, 4=47.52%, 10=45.87%, 20=0.05% 00:28:49.016 cpu : usr=90.32%, sys=8.60%, ctx=9, majf=0, minf=9 00:28:49.016 IO depths : 1=0.1%, 2=9.6%, 4=59.5%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.016 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.016 issued rwts: total=10725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.016 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:49.016 filename1: (groupid=0, jobs=1): err= 0: pid=82709: Wed May 15 09:22:00 2024 00:28:49.016 read: IOPS=2321, BW=18.1MiB/s (19.0MB/s)(90.7MiB/5002msec) 00:28:49.016 slat (nsec): min=4425, max=48401, avg=12983.34, stdev=4814.26 00:28:49.016 clat (usec): min=791, max=15230, avg=3412.02, stdev=1028.23 00:28:49.016 lat (usec): min=801, max=15244, avg=3425.01, stdev=1027.72 00:28:49.016 clat percentiles 
(usec): 00:28:49.016 | 1.00th=[ 1237], 5.00th=[ 1713], 10.00th=[ 2073], 20.00th=[ 2376], 00:28:49.016 | 30.00th=[ 2835], 40.00th=[ 3195], 50.00th=[ 3687], 60.00th=[ 3884], 00:28:49.016 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 4752], 00:28:49.016 | 99.00th=[ 5604], 99.50th=[ 5932], 99.90th=[ 8586], 99.95th=[10028], 00:28:49.016 | 99.99th=[12125] 00:28:49.016 bw ( KiB/s): min=16080, max=20464, per=27.26%, avg=18576.89, stdev=1744.41, samples=9 00:28:49.016 iops : min= 2010, max= 2558, avg=2322.11, stdev=218.05, samples=9 00:28:49.016 lat (usec) : 1000=0.16% 00:28:49.016 lat (msec) : 2=7.94%, 4=58.30%, 10=33.54%, 20=0.06% 00:28:49.016 cpu : usr=89.18%, sys=9.52%, ctx=37, majf=0, minf=9 00:28:49.016 IO depths : 1=0.1%, 2=3.8%, 4=62.6%, 8=33.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.016 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.016 issued rwts: total=11611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.016 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:49.016 filename1: (groupid=0, jobs=1): err= 0: pid=82710: Wed May 15 09:22:00 2024 00:28:49.016 read: IOPS=2075, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5001msec) 00:28:49.016 slat (nsec): min=5212, max=46665, avg=14172.36, stdev=4427.58 00:28:49.016 clat (usec): min=720, max=15319, avg=3809.08, stdev=972.74 00:28:49.016 lat (usec): min=730, max=15333, avg=3823.26, stdev=972.90 00:28:49.016 clat percentiles (usec): 00:28:49.016 | 1.00th=[ 1303], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 3097], 00:28:49.016 | 30.00th=[ 3654], 40.00th=[ 3884], 50.00th=[ 4015], 60.00th=[ 4178], 00:28:49.016 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 4948], 00:28:49.016 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 8717], 99.95th=[10159], 00:28:49.016 | 99.99th=[11863] 00:28:49.016 bw ( KiB/s): min=14400, max=20128, per=24.02%, avg=16374.78, stdev=1681.97, samples=9 00:28:49.016 iops : min= 1800, max= 2516, avg=2046.78, stdev=210.32, samples=9 00:28:49.016 lat (usec) : 750=0.13%, 1000=0.27% 00:28:49.016 lat (msec) : 2=3.26%, 4=46.06%, 10=50.20%, 20=0.09% 00:28:49.016 cpu : usr=89.42%, sys=9.64%, ctx=16, majf=0, minf=9 00:28:49.016 IO depths : 1=0.1%, 2=12.1%, 4=58.1%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.016 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.016 issued rwts: total=10380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.016 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:49.016 00:28:49.016 Run status group 0 (all jobs): 00:28:49.016 READ: bw=66.6MiB/s (69.8MB/s), 15.5MiB/s-18.1MiB/s (16.2MB/s-19.0MB/s), io=333MiB (349MB), run=5001-5003msec 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.016 
09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.016 00:28:49.016 real 0m23.614s 00:28:49.016 user 2m0.219s 00:28:49.016 sys 0m11.157s 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 ************************************ 00:28:49.016 END TEST fio_dif_rand_params 00:28:49.016 ************************************ 00:28:49.016 09:22:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:49.016 09:22:00 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:49.016 09:22:00 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 ************************************ 00:28:49.016 START TEST fio_dif_digest 00:28:49.016 ************************************ 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:49.016 09:22:00 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 bdev_null0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.016 [2024-05-15 09:22:00.771923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:49.016 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:49.017 { 00:28:49.017 "params": { 00:28:49.017 "name": "Nvme$subsystem", 00:28:49.017 "trtype": "$TEST_TRANSPORT", 00:28:49.017 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:28:49.017 "adrfam": "ipv4", 00:28:49.017 "trsvcid": "$NVMF_PORT", 00:28:49.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.017 "hdgst": ${hdgst:-false}, 00:28:49.017 "ddgst": ${ddgst:-false} 00:28:49.017 }, 00:28:49.017 "method": "bdev_nvme_attach_controller" 00:28:49.017 } 00:28:49.017 EOF 00:28:49.017 )") 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:49.017 "params": { 00:28:49.017 "name": "Nvme0", 00:28:49.017 "trtype": "tcp", 00:28:49.017 "traddr": "10.0.0.2", 00:28:49.017 "adrfam": "ipv4", 00:28:49.017 "trsvcid": "4420", 00:28:49.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:49.017 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:49.017 "hdgst": true, 00:28:49.017 "ddgst": true 00:28:49.017 }, 00:28:49.017 "method": "bdev_nvme_attach_controller" 00:28:49.017 }' 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:49.017 09:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.017 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:49.017 ... 
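[editor's note] fio is driven here through SPDK's bdev plugin: /dev/fd/62 carries the bdev_nvme JSON printed above (hdgst and ddgst both true, so NVMe/TCP header and data digests are exercised) while /dev/fd/61 carries a generated job file the trace never shows. A hypothetical equivalent using regular files, with the job parameters read off the job banner above and the three threads started below (~10 s runtime inferred from the results); filename=Nvme0n1 is an assumption based on the Nvme0 controller name, and bdev_nvme.json stands for a file holding the printed JSON:

  dif_digest.fio (hypothetical):
  [filename0]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=10
  filename=Nvme0n1

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf bdev_nvme.json dif_digest.fio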
00:28:49.017 fio-3.35 00:28:49.017 Starting 3 threads 00:29:01.229 00:29:01.229 filename0: (groupid=0, jobs=1): err= 0: pid=82812: Wed May 15 09:22:11 2024 00:29:01.229 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10003msec) 00:29:01.229 slat (usec): min=7, max=305, avg=26.27, stdev=14.50 00:29:01.229 clat (usec): min=12179, max=20338, avg=13253.46, stdev=859.80 00:29:01.229 lat (usec): min=12196, max=20370, avg=13279.73, stdev=859.21 00:29:01.229 clat percentiles (usec): 00:29:01.229 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12780], 20.00th=[12780], 00:29:01.229 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:29:01.229 | 70.00th=[13173], 80.00th=[13435], 90.00th=[14484], 95.00th=[15533], 00:29:01.229 | 99.00th=[16188], 99.50th=[16581], 99.90th=[20317], 99.95th=[20317], 00:29:01.229 | 99.99th=[20317] 00:29:01.229 bw ( KiB/s): min=26880, max=29952, per=33.32%, avg=28805.80, stdev=847.94, samples=20 00:29:01.229 iops : min= 210, max= 234, avg=225.00, stdev= 6.60, samples=20 00:29:01.229 lat (msec) : 20=99.87%, 50=0.13% 00:29:01.229 cpu : usr=88.53%, sys=10.02%, ctx=88, majf=0, minf=9 00:29:01.229 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:01.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.229 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:01.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:01.229 filename0: (groupid=0, jobs=1): err= 0: pid=82813: Wed May 15 09:22:11 2024 00:29:01.229 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(282MiB/10007msec) 00:29:01.229 slat (nsec): min=4742, max=73329, avg=24465.06, stdev=12278.32 00:29:01.229 clat (usec): min=12234, max=20208, avg=13264.07, stdev=876.16 00:29:01.229 lat (usec): min=12255, max=20240, avg=13288.53, stdev=876.12 00:29:01.229 clat percentiles (usec): 00:29:01.229 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12780], 20.00th=[12780], 00:29:01.229 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:29:01.229 | 70.00th=[13173], 80.00th=[13435], 90.00th=[14484], 95.00th=[15533], 00:29:01.229 | 99.00th=[16188], 99.50th=[16712], 99.90th=[20055], 99.95th=[20317], 00:29:01.229 | 99.99th=[20317] 00:29:01.229 bw ( KiB/s): min=26880, max=29952, per=33.31%, avg=28800.00, stdev=768.00, samples=20 00:29:01.229 iops : min= 210, max= 234, avg=225.00, stdev= 6.00, samples=20 00:29:01.229 lat (msec) : 20=99.87%, 50=0.13% 00:29:01.229 cpu : usr=88.77%, sys=9.98%, ctx=250, majf=0, minf=0 00:29:01.229 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:01.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.229 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:01.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:01.229 filename0: (groupid=0, jobs=1): err= 0: pid=82814: Wed May 15 09:22:11 2024 00:29:01.229 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(282MiB/10005msec) 00:29:01.229 slat (usec): min=4, max=684, avg=26.26, stdev=18.78 00:29:01.229 clat (usec): min=12109, max=20344, avg=13257.55, stdev=867.38 00:29:01.229 lat (usec): min=12211, max=20380, avg=13283.81, stdev=866.62 00:29:01.229 clat percentiles (usec): 00:29:01.229 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12780], 20.00th=[12780], 00:29:01.229 | 30.00th=[12780], 40.00th=[12911], 
50.00th=[12911], 60.00th=[13042], 00:29:01.229 | 70.00th=[13173], 80.00th=[13435], 90.00th=[14484], 95.00th=[15533], 00:29:01.229 | 99.00th=[16188], 99.50th=[16581], 99.90th=[20317], 99.95th=[20317], 00:29:01.229 | 99.99th=[20317] 00:29:01.229 bw ( KiB/s): min=26880, max=29952, per=33.31%, avg=28802.90, stdev=769.63, samples=20 00:29:01.229 iops : min= 210, max= 234, avg=225.00, stdev= 6.00, samples=20 00:29:01.229 lat (msec) : 20=99.87%, 50=0.13% 00:29:01.229 cpu : usr=89.38%, sys=9.54%, ctx=47, majf=0, minf=0 00:29:01.229 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:01.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.229 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:01.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:01.229 00:29:01.229 Run status group 0 (all jobs): 00:29:01.229 READ: bw=84.4MiB/s (88.5MB/s), 28.1MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=845MiB (886MB), run=10003-10007msec 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.229 ************************************ 00:29:01.229 END TEST fio_dif_digest 00:29:01.229 ************************************ 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.229 00:29:01.229 real 0m11.002s 00:29:01.229 user 0m27.334s 00:29:01.229 sys 0m3.229s 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:01.229 09:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.229 09:22:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:01.229 09:22:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:01.229 rmmod nvme_tcp 00:29:01.229 rmmod nvme_fabrics 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@124 
-- # set -e 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82074 ']' 00:29:01.229 09:22:11 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82074 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 82074 ']' 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 82074 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 82074 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 82074' 00:29:01.230 killing process with pid 82074 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@966 -- # kill 82074 00:29:01.230 [2024-05-15 09:22:11.872879] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:01.230 09:22:11 nvmf_dif -- common/autotest_common.sh@971 -- # wait 82074 00:29:01.230 09:22:12 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:01.230 09:22:12 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:01.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:01.230 Waiting for block devices as requested 00:29:01.230 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:01.230 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:01.230 09:22:12 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:01.230 09:22:12 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:01.230 09:22:12 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:01.230 09:22:12 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:01.230 09:22:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.230 09:22:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:01.230 09:22:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.230 09:22:12 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:01.230 ************************************ 00:29:01.230 END TEST nvmf_dif 00:29:01.230 ************************************ 00:29:01.230 00:29:01.230 real 1m0.068s 00:29:01.230 user 3m43.715s 00:29:01.230 sys 0m23.775s 00:29:01.230 09:22:12 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:01.230 09:22:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:01.230 09:22:12 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:01.230 09:22:12 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:01.230 09:22:12 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:01.230 09:22:12 -- common/autotest_common.sh@10 -- # set +x 00:29:01.230 ************************************ 00:29:01.230 START TEST nvmf_abort_qd_sizes 00:29:01.230 ************************************ 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 
-- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:01.230 * Looking for test storage... 00:29:01.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:01.230 09:22:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.230 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:01.231 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:01.231 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:01.231 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:01.231 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:01.231 09:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:01.231 Cannot find device "nvmf_tgt_br" 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:01.231 Cannot find device "nvmf_tgt_br2" 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:01.231 Cannot find device "nvmf_tgt_br" 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:01.231 Cannot find device "nvmf_tgt_br2" 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:01.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:01.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:01.231 09:22:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:01.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:29:01.231 00:29:01.231 --- 10.0.0.2 ping statistics --- 00:29:01.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.231 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:01.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:01.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:29:01.231 00:29:01.231 --- 10.0.0.3 ping statistics --- 00:29:01.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.231 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:01.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:01.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:29:01.231 00:29:01.231 --- 10.0.0.1 ping statistics --- 00:29:01.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.231 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:01.231 09:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:01.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:01.797 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:01.797 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:01.797 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.797 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:01.797 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:01.797 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.797 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:01.797 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:01.797 09:22:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:01.798 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:01.798 09:22:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:01.798 09:22:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83406 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83406 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 83406 ']' 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:02.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:02.056 09:22:14 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:02.056 [2024-05-15 09:22:14.292003] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
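[editor's note] The three pings above close out nvmf_veth_init: two veth pairs into a target namespace, a third pair for the second target address, and a host-side bridge tying the peer ends together. A condensed sketch of the same plumbing using the names from the trace (the helpers in nvmf/common.sh additionally tear down any stale devices first):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for br_end in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br_end" master nvmf_br
      ip link set "$br_end" up
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT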
00:29:02.056 [2024-05-15 09:22:14.292106] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.056 [2024-05-15 09:22:14.437725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.314 [2024-05-15 09:22:14.560332] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.314 [2024-05-15 09:22:14.560411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.314 [2024-05-15 09:22:14.560426] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.314 [2024-05-15 09:22:14.560440] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.314 [2024-05-15 09:22:14.560451] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.314 [2024-05-15 09:22:14.560637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.314 [2024-05-15 09:22:14.560785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.314 [2024-05-15 09:22:14.561495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:02.314 [2024-05-15 09:22:14.561500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.880 09:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:02.880 09:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:29:02.880 09:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:02.880 09:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:02.880 09:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:02.880 09:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:29:02.881 09:22:15 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:29:02.881 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:03.140 09:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.140 ************************************ 00:29:03.140 START TEST spdk_target_abort 00:29:03.140 ************************************ 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:03.140 spdk_targetn1 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:03.140 [2024-05-15 09:22:15.413071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:03.140 [2024-05-15 09:22:15.441024] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:03.140 [2024-05-15 09:22:15.441330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:03.140 09:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:06.425 Initializing NVMe Controllers 00:29:06.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:06.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:06.426 Initialization complete. Launching workers. 
00:29:06.426 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12134, failed: 0 00:29:06.426 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1091, failed to submit 11043 00:29:06.426 success 924, unsuccess 167, failed 0 00:29:06.426 09:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:06.426 09:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:09.774 Initializing NVMe Controllers 00:29:09.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:09.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:09.774 Initialization complete. Launching workers. 00:29:09.774 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6912, failed: 0 00:29:09.774 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1152, failed to submit 5760 00:29:09.774 success 284, unsuccess 868, failed 0 00:29:09.774 09:22:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:09.774 09:22:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:13.074 Initializing NVMe Controllers 00:29:13.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:13.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:13.074 Initialization complete. Launching workers. 
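[editor's note] Each pass in this test (the -q 64 results follow below) is the same abort example pointed at the SPDK target with an increasing queue depth; condensed from the traced command lines:

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done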
00:29:13.074 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29249, failed: 0 00:29:13.074 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2284, failed to submit 26965 00:29:13.074 success 403, unsuccess 1881, failed 0 00:29:13.074 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:13.074 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.074 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:13.074 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.074 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:13.074 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.074 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83406 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 83406 ']' 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 83406 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83406 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:13.332 killing process with pid 83406 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83406' 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 83406 00:29:13.332 [2024-05-15 09:22:25.760287] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:13.332 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 83406 00:29:13.590 00:29:13.590 real 0m10.654s 00:29:13.590 user 0m41.950s 00:29:13.590 sys 0m2.713s 00:29:13.590 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:13.590 09:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:13.590 ************************************ 00:29:13.590 END TEST spdk_target_abort 00:29:13.590 ************************************ 00:29:13.848 09:22:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:13.848 09:22:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:13.848 09:22:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # 
xtrace_disable 00:29:13.848 09:22:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:13.848 ************************************ 00:29:13.848 START TEST kernel_target_abort 00:29:13.848 ************************************ 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:13.848 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:13.849 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:14.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:14.107 Waiting for block devices as requested 00:29:14.107 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:14.370 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:14.370 No valid GPT data, bailing 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:29:14.370 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:14.634 No valid GPT data, bailing 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
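[editor's note] The "No valid GPT data, bailing" messages here, and for the remaining namespaces below, are the expected outcome: configure_kernel_target scans /sys/block/nvme* for a namespace that is neither zoned nor carrying a partition table and keeps the last such device as the kernel target's backing namespace. A rough sketch of that screening, simplified from the helpers in scripts/common.sh and assuming it is run from the SPDK repo root:

  nvme=
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # skip zoned namespaces
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # spdk-gpt.py prints "No valid GPT data, bailing" on a blank disk; blkid reporting
      # no partition-table type means the device is free to use
      scripts/spdk-gpt.py "$dev" || true
      [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] && nvme=/dev/$dev
  done
  [[ -b $nvme ]]   # in this run the scan ends up selecting /dev/nvme1n1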
00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:14.634 No valid GPT data, bailing 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:14.634 No valid GPT data, bailing 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:14.634 09:22:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b --hostid=c738663f-2662-4398-b539-15f14394251b -a 10.0.0.1 -t tcp -s 4420 00:29:14.634 00:29:14.634 Discovery Log Number of Records 2, Generation counter 2 00:29:14.634 =====Discovery Log Entry 0====== 00:29:14.634 trtype: tcp 00:29:14.634 adrfam: ipv4 00:29:14.634 subtype: current discovery subsystem 00:29:14.634 treq: not specified, sq flow control disable supported 00:29:14.634 portid: 1 00:29:14.634 trsvcid: 4420 00:29:14.634 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:14.634 traddr: 10.0.0.1 00:29:14.634 eflags: none 00:29:14.634 sectype: none 00:29:14.634 =====Discovery Log Entry 1====== 00:29:14.634 trtype: tcp 00:29:14.634 adrfam: ipv4 00:29:14.634 subtype: nvme subsystem 00:29:14.634 treq: not specified, sq flow control disable supported 00:29:14.634 portid: 1 00:29:14.634 trsvcid: 4420 00:29:14.634 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:14.634 traddr: 10.0.0.1 00:29:14.634 eflags: none 00:29:14.634 sectype: none 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:14.634 09:22:27 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:14.634 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.635 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:14.635 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:14.635 09:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:17.920 Initializing NVMe Controllers 00:29:17.920 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:17.920 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:17.920 Initialization complete. Launching workers. 00:29:17.920 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40003, failed: 0 00:29:17.920 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40003, failed to submit 0 00:29:17.920 success 0, unsuccess 40003, failed 0 00:29:17.920 09:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:17.920 09:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:21.206 Initializing NVMe Controllers 00:29:21.206 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:21.206 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:21.206 Initialization complete. Launching workers. 
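The runs above and below are the rabort helper from target/abort_qd_sizes.sh: it assembles the transport ID string one field at a time (trtype, adrfam, traddr, trsvcid, subnqn) and then sweeps the SPDK abort example over the queue depths in qds. Reduced to its essentials, and using the exact arguments visible in the xtrace, the loop is:

  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in "${qds[@]}"; do
      # 50/50 read/write at 4096-byte blocks; the example aborts its own outstanding I/O
      # and reports how many abort commands it could submit at this queue depth
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

The per-run summaries are what the test is after: as the queue depth grows, a larger share of outstanding commands never gets an abort submitted for it (failed to submit 0 at -q 4, 42486 at -q 24, 67492 at -q 64).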
00:29:21.206 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75523, failed: 0 00:29:21.206 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33037, failed to submit 42486 00:29:21.206 success 0, unsuccess 33037, failed 0 00:29:21.206 09:22:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:21.206 09:22:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:24.531 Initializing NVMe Controllers 00:29:24.531 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:24.531 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:24.531 Initialization complete. Launching workers. 00:29:24.531 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89994, failed: 0 00:29:24.531 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22502, failed to submit 67492 00:29:24.531 success 0, unsuccess 22502, failed 0 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:24.531 09:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:25.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.703 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:27.703 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:27.703 00:29:27.704 real 0m13.650s 00:29:27.704 user 0m6.539s 00:29:27.704 sys 0m4.612s 00:29:27.704 09:22:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:27.704 ************************************ 00:29:27.704 END TEST kernel_target_abort 00:29:27.704 ************************************ 00:29:27.704 09:22:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:27.704 
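For reference, the kernel target this test talked to is built and torn down entirely through configfs by nvmf/common.sh: the mkdir, echo and ln -s calls near the start of the test, and the clean_kernel_target calls just above. A minimal sketch of that lifecycle follows; the xtrace hides redirection targets, so the nvmet attribute names here are the standard ones and are an assumption, while the NQN, block device and address values are taken from this log:

  modprobe nvmet nvmet_tcp                       # assumed to have happened before this excerpt
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  echo /dev/nvme1n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  mkdir ports/1
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp      > ports/1/addr_trtype
  echo 4420     > ports/1/addr_trsvcid
  echo ipv4     > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420       # yields the two discovery log entries shown earlier

  # teardown, mirroring clean_kernel_target just above: disable the namespace, unlink the port,
  # remove the directories in reverse order, then unload the modules
  echo 0 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir ports/1
  rmdir subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet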
09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:27.704 rmmod nvme_tcp 00:29:27.704 rmmod nvme_fabrics 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83406 ']' 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83406 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 83406 ']' 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 83406 00:29:27.704 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (83406) - No such process 00:29:27.704 Process with pid 83406 is not found 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 83406 is not found' 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:27.704 09:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:27.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.961 Waiting for block devices as requested 00:29:27.961 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:28.219 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:28.219 00:29:28.219 real 0m27.695s 00:29:28.219 user 0m49.682s 00:29:28.219 sys 0m8.847s 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:28.219 09:22:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:28.219 ************************************ 00:29:28.219 END TEST nvmf_abort_qd_sizes 00:29:28.219 ************************************ 00:29:28.219 09:22:40 -- spdk/autotest.sh@291 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:28.219 09:22:40 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:28.219 09:22:40 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:28.219 09:22:40 -- common/autotest_common.sh@10 -- # set +x 00:29:28.219 
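nvmftestfini above unwinds the host side in the same spirit: it drops the kernel initiator modules, checks that the target application is gone, and hands the NVMe devices and the test network back via setup.sh. A rough sketch of those steps as they appear in the log (the nvmfpid variable name is illustrative, the real helper lives in nvmf/common.sh):

  sync
  modprobe -v -r nvme-tcp                                 # the rmmod nvme_tcp / nvme_fabrics lines above
  modprobe -v -r nvme-fabrics
  kill -0 "$nvmfpid" 2> /dev/null && kill "$nvmfpid"      # pid 83406 had already exited in this run
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset     # the uio_pci_generic -> nvme rebinds above
  ip -4 addr flush nvmf_init_if                           # drop the test interface address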
************************************ 00:29:28.219 START TEST keyring_file 00:29:28.219 ************************************ 00:29:28.219 09:22:40 keyring_file -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:28.500 * Looking for test storage... 00:29:28.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c738663f-2662-4398-b539-15f14394251b 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=c738663f-2662-4398-b539-15f14394251b 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:28.500 09:22:40 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.500 09:22:40 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.500 09:22:40 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.500 09:22:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.500 09:22:40 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.500 09:22:40 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.500 09:22:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:28.500 09:22:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ClY7d3gKFK 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:28.500 09:22:40 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ClY7d3gKFK 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ClY7d3gKFK 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ClY7d3gKFK 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vjAgxV7hC7 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:28.500 09:22:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vjAgxV7hC7 00:29:28.500 09:22:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vjAgxV7hC7 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.vjAgxV7hC7 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=84277 00:29:28.500 09:22:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84277 00:29:28.500 09:22:40 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 84277 ']' 00:29:28.500 09:22:40 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.500 09:22:40 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:28.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.500 09:22:40 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.500 09:22:40 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:28.500 09:22:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:28.500 [2024-05-15 09:22:40.926995] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 
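The two temp files used throughout this suite come from prep_key (keyring/common.sh): it asks mktemp for a path, writes the PSK in the NVMe TLS interchange format via format_interchange_psk from test/nvmf/common.sh (that is the small inline python step above), and locks the file down to owner-only permissions. Condensed, with the hex keys from this run:

  key0path=$(mktemp)                                          # /tmp/tmp.ClY7d3gKFK in this log
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"                                      # anything looser is rejected later in the test
  key1path=$(mktemp)                                          # /tmp/tmp.vjAgxV7hC7
  format_interchange_psk 112233445566778899aabbccddeeff00 0 > "$key1path"
  chmod 0600 "$key1path"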
00:29:28.500 [2024-05-15 09:22:40.927139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84277 ] 00:29:28.779 [2024-05-15 09:22:41.072229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.779 [2024-05-15 09:22:41.179250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.716 09:22:41 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:29.716 09:22:41 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:29:29.716 09:22:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:29.716 09:22:41 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.716 09:22:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:29.716 [2024-05-15 09:22:41.966161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.716 null0 00:29:29.716 [2024-05-15 09:22:41.998098] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:29.716 [2024-05-15 09:22:41.998194] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:29.716 [2024-05-15 09:22:41.998417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:29.716 [2024-05-15 09:22:42.006173] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.716 09:22:42 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:29.716 [2024-05-15 09:22:42.022158] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:29.716 request: 00:29:29.716 { 00:29:29.716 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.716 "secure_channel": false, 00:29:29.716 "listen_address": { 00:29:29.716 "trtype": "tcp", 00:29:29.716 "traddr": "127.0.0.1", 00:29:29.716 "trsvcid": "4420" 00:29:29.716 }, 00:29:29.716 "method": "nvmf_subsystem_add_listener", 00:29:29.716 "req_id": 1 00:29:29.716 } 00:29:29.716 Got JSON-RPC error response 00:29:29.716 response: 00:29:29.716 { 00:29:29.716 "code": -32602, 00:29:29.716 "message": "Invalid parameters" 00:29:29.716 } 00:29:29.716 
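The failed nvmf_subsystem_add_listener above is deliberate: the target already listens on 127.0.0.1:4420, so the NOT wrapper expects the duplicate registration to be rejected. From here on the test drives a second process, bdevperf, through its own RPC socket; bperf_cmd is simply rpc.py pointed at /var/tmp/bperf.sock. The recurring pattern in the rest of this suite, condensed (the two jq filters from the log are combined here for brevity):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK
  "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vjAgxV7hC7
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0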
09:22:42 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:29.716 09:22:42 keyring_file -- keyring/file.sh@46 -- # bperfpid=84294 00:29:29.716 09:22:42 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84294 /var/tmp/bperf.sock 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 84294 ']' 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:29.716 09:22:42 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:29.716 09:22:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:29.716 [2024-05-15 09:22:42.090685] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:29:29.716 [2024-05-15 09:22:42.090799] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84294 ] 00:29:29.974 [2024-05-15 09:22:42.234451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.974 [2024-05-15 09:22:42.342095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.905 09:22:43 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:30.905 09:22:43 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:29:30.905 09:22:43 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK 00:29:30.905 09:22:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK 00:29:30.905 09:22:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vjAgxV7hC7 00:29:30.905 09:22:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vjAgxV7hC7 00:29:31.163 09:22:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:31.163 09:22:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:31.163 09:22:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.163 09:22:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.163 09:22:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:31.421 09:22:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ClY7d3gKFK == \/\t\m\p\/\t\m\p\.\C\l\Y\7\d\3\g\K\F\K ]] 00:29:31.422 09:22:43 keyring_file -- keyring/file.sh@52 -- # 
get_key key1 00:29:31.422 09:22:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:31.422 09:22:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.422 09:22:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.422 09:22:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:32.001 09:22:44 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vjAgxV7hC7 == \/\t\m\p\/\t\m\p\.\v\j\A\g\x\V\7\h\C\7 ]] 00:29:32.001 09:22:44 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.001 09:22:44 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:32.001 09:22:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.001 09:22:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:32.566 09:22:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:32.566 09:22:44 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.566 09:22:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.566 [2024-05-15 09:22:44.920706] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:32.566 nvme0n1 00:29:32.824 09:22:45 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.824 09:22:45 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:32.824 09:22:45 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.824 09:22:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:33.389 09:22:45 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:33.389 09:22:45 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:33.389 Running I/O for 1 seconds... 00:29:34.782 00:29:34.782 Latency(us) 00:29:34.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.782 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:34.782 nvme0n1 : 1.01 12214.14 47.71 0.00 0.00 10448.93 5055.63 17975.59 00:29:34.782 =================================================================================================================== 00:29:34.782 Total : 12214.14 47.71 0.00 0.00 10448.93 5055.63 17975.59 00:29:34.782 0 00:29:34.782 09:22:46 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:34.782 09:22:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:34.782 09:22:47 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:34.782 09:22:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.782 09:22:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.782 09:22:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.782 09:22:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.783 09:22:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.073 09:22:47 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:35.074 09:22:47 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:35.074 09:22:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:35.074 09:22:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.074 09:22:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:35.074 09:22:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.074 09:22:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.331 09:22:47 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:35.331 09:22:47 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:35.331 09:22:47 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:35.332 09:22:47 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:35.332 09:22:47 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:29:35.332 09:22:47 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:35.332 09:22:47 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:29:35.332 09:22:47 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:35.332 09:22:47 keyring_file -- common/autotest_common.sh@652 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:35.332 09:22:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:35.589 [2024-05-15 09:22:47.864009] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:35.589 [2024-05-15 09:22:47.864234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6ff10 (107): Transport endpoint is not connected 00:29:35.589 [2024-05-15 09:22:47.865220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6ff10 (9): Bad file descriptor 00:29:35.589 [2024-05-15 09:22:47.866218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:35.589 [2024-05-15 09:22:47.866244] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:35.589 [2024-05-15 09:22:47.866256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:35.589 request: 00:29:35.589 { 00:29:35.589 "name": "nvme0", 00:29:35.589 "trtype": "tcp", 00:29:35.589 "traddr": "127.0.0.1", 00:29:35.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.589 "adrfam": "ipv4", 00:29:35.589 "trsvcid": "4420", 00:29:35.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.589 "psk": "key1", 00:29:35.589 "method": "bdev_nvme_attach_controller", 00:29:35.589 "req_id": 1 00:29:35.589 } 00:29:35.589 Got JSON-RPC error response 00:29:35.589 response: 00:29:35.589 { 00:29:35.589 "code": -32602, 00:29:35.589 "message": "Invalid parameters" 00:29:35.589 } 00:29:35.589 09:22:47 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:35.589 09:22:47 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:35.589 09:22:47 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:35.589 09:22:47 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:35.589 09:22:47 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:35.589 09:22:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:35.589 09:22:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.589 09:22:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:35.589 09:22:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.589 09:22:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.155 09:22:48 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:36.155 09:22:48 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:36.155 09:22:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:36.155 09:22:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.155 09:22:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.155 09:22:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:36.155 09:22:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.155 09:22:48 
keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:36.155 09:22:48 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:36.155 09:22:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:36.413 09:22:48 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:36.413 09:22:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:36.979 09:22:49 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:36.979 09:22:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.979 09:22:49 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:36.979 09:22:49 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:36.979 09:22:49 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ClY7d3gKFK 00:29:36.979 09:22:49 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK 00:29:36.979 09:22:49 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:36.979 09:22:49 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK 00:29:36.979 09:22:49 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:29:36.979 09:22:49 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:36.979 09:22:49 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:29:36.979 09:22:49 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:36.979 09:22:49 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK 00:29:36.979 09:22:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK 00:29:37.237 [2024-05-15 09:22:49.653734] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ClY7d3gKFK': 0100660 00:29:37.237 [2024-05-15 09:22:49.654114] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:37.237 request: 00:29:37.237 { 00:29:37.237 "name": "key0", 00:29:37.237 "path": "/tmp/tmp.ClY7d3gKFK", 00:29:37.237 "method": "keyring_file_add_key", 00:29:37.237 "req_id": 1 00:29:37.237 } 00:29:37.237 Got JSON-RPC error response 00:29:37.237 response: 00:29:37.237 { 00:29:37.237 "code": -1, 00:29:37.237 "message": "Operation not permitted" 00:29:37.237 } 00:29:37.237 09:22:49 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:37.237 09:22:49 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:37.237 09:22:49 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:37.237 09:22:49 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:37.237 09:22:49 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ClY7d3gKFK 00:29:37.237 09:22:49 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK 00:29:37.237 09:22:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK 00:29:37.494 09:22:49 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ClY7d3gKFK 
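The checks just above and just below are the two negative paths for the file-based keyring: keyring_file_add_key refuses a key file whose mode is not owner-only (the 0100660 / Operation not permitted error that follows), and once the backing file of a registered key has been removed, bdev_nvme_attach_controller fails with No such device. Boiled down:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0660 /tmp/tmp.ClY7d3gKFK
  "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK   # rejected: bad permissions
  chmod 0600 /tmp/tmp.ClY7d3gKFK
  "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ClY7d3gKFK   # accepted
  rm -f /tmp/tmp.ClY7d3gKFK                                                     # key0 stays registered, file is gone
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # fails: No such device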
00:29:37.494 09:22:49 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:37.494 09:22:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:37.494 09:22:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.494 09:22:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.494 09:22:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.494 09:22:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:38.058 09:22:50 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:38.058 09:22:50 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.058 09:22:50 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:29:38.058 09:22:50 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.058 09:22:50 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:29:38.058 09:22:50 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:38.058 09:22:50 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:29:38.058 09:22:50 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:38.058 09:22:50 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.058 09:22:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.316 [2024-05-15 09:22:50.549913] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ClY7d3gKFK': No such file or directory 00:29:38.316 [2024-05-15 09:22:50.550279] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:38.316 [2024-05-15 09:22:50.550446] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:38.316 [2024-05-15 09:22:50.550567] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:38.316 [2024-05-15 09:22:50.550619] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:38.316 request: 00:29:38.316 { 00:29:38.316 "name": "nvme0", 00:29:38.316 "trtype": "tcp", 00:29:38.316 "traddr": "127.0.0.1", 00:29:38.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:38.316 "adrfam": "ipv4", 00:29:38.316 "trsvcid": "4420", 00:29:38.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.316 "psk": "key0", 00:29:38.316 "method": "bdev_nvme_attach_controller", 00:29:38.316 "req_id": 1 00:29:38.316 } 00:29:38.316 Got JSON-RPC error response 00:29:38.316 response: 00:29:38.316 { 00:29:38.316 "code": -19, 00:29:38.316 "message": "No such device" 00:29:38.316 } 00:29:38.316 09:22:50 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:29:38.316 09:22:50 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:38.316 
09:22:50 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:38.316 09:22:50 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:38.316 09:22:50 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:38.316 09:22:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:38.574 09:22:50 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oApIAoT2p1 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:38.574 09:22:50 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:38.574 09:22:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:38.574 09:22:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:38.574 09:22:50 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:38.574 09:22:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:38.574 09:22:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oApIAoT2p1 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oApIAoT2p1 00:29:38.574 09:22:50 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.oApIAoT2p1 00:29:38.574 09:22:50 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oApIAoT2p1 00:29:38.574 09:22:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oApIAoT2p1 00:29:38.832 09:22:51 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.832 09:22:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.398 nvme0n1 00:29:39.398 09:22:51 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:39.398 09:22:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:39.398 09:22:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:39.398 09:22:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:39.398 09:22:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.398 09:22:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:39.655 09:22:51 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:39.655 09:22:51 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:39.655 09:22:51 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:39.913 09:22:52 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:39.913 09:22:52 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:39.913 09:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:39.913 09:22:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.913 09:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.170 09:22:52 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:40.170 09:22:52 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:40.170 09:22:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:40.170 09:22:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.170 09:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.170 09:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.170 09:22:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.427 09:22:52 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:40.427 09:22:52 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:40.427 09:22:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:40.684 09:22:53 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:40.684 09:22:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.684 09:22:53 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:40.940 09:22:53 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:40.940 09:22:53 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oApIAoT2p1 00:29:40.940 09:22:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oApIAoT2p1 00:29:41.505 09:22:53 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vjAgxV7hC7 00:29:41.505 09:22:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vjAgxV7hC7 00:29:41.763 09:22:54 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:41.763 09:22:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:42.021 nvme0n1 00:29:42.021 09:22:54 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:42.021 09:22:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:42.279 09:22:54 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:42.279 "subsystems": [ 00:29:42.279 { 00:29:42.279 "subsystem": "keyring", 00:29:42.279 "config": [ 00:29:42.279 { 00:29:42.279 "method": 
"keyring_file_add_key", 00:29:42.279 "params": { 00:29:42.279 "name": "key0", 00:29:42.279 "path": "/tmp/tmp.oApIAoT2p1" 00:29:42.279 } 00:29:42.279 }, 00:29:42.279 { 00:29:42.279 "method": "keyring_file_add_key", 00:29:42.279 "params": { 00:29:42.279 "name": "key1", 00:29:42.279 "path": "/tmp/tmp.vjAgxV7hC7" 00:29:42.279 } 00:29:42.279 } 00:29:42.279 ] 00:29:42.279 }, 00:29:42.279 { 00:29:42.279 "subsystem": "iobuf", 00:29:42.279 "config": [ 00:29:42.279 { 00:29:42.279 "method": "iobuf_set_options", 00:29:42.279 "params": { 00:29:42.279 "small_pool_count": 8192, 00:29:42.279 "large_pool_count": 1024, 00:29:42.279 "small_bufsize": 8192, 00:29:42.279 "large_bufsize": 135168 00:29:42.279 } 00:29:42.279 } 00:29:42.279 ] 00:29:42.279 }, 00:29:42.279 { 00:29:42.279 "subsystem": "sock", 00:29:42.279 "config": [ 00:29:42.279 { 00:29:42.279 "method": "sock_impl_set_options", 00:29:42.279 "params": { 00:29:42.279 "impl_name": "uring", 00:29:42.279 "recv_buf_size": 2097152, 00:29:42.279 "send_buf_size": 2097152, 00:29:42.279 "enable_recv_pipe": true, 00:29:42.279 "enable_quickack": false, 00:29:42.279 "enable_placement_id": 0, 00:29:42.279 "enable_zerocopy_send_server": false, 00:29:42.279 "enable_zerocopy_send_client": false, 00:29:42.279 "zerocopy_threshold": 0, 00:29:42.279 "tls_version": 0, 00:29:42.279 "enable_ktls": false 00:29:42.279 } 00:29:42.279 }, 00:29:42.279 { 00:29:42.279 "method": "sock_impl_set_options", 00:29:42.279 "params": { 00:29:42.279 "impl_name": "posix", 00:29:42.279 "recv_buf_size": 2097152, 00:29:42.279 "send_buf_size": 2097152, 00:29:42.279 "enable_recv_pipe": true, 00:29:42.279 "enable_quickack": false, 00:29:42.279 "enable_placement_id": 0, 00:29:42.279 "enable_zerocopy_send_server": true, 00:29:42.279 "enable_zerocopy_send_client": false, 00:29:42.279 "zerocopy_threshold": 0, 00:29:42.279 "tls_version": 0, 00:29:42.279 "enable_ktls": false 00:29:42.279 } 00:29:42.279 }, 00:29:42.279 { 00:29:42.279 "method": "sock_impl_set_options", 00:29:42.279 "params": { 00:29:42.279 "impl_name": "ssl", 00:29:42.279 "recv_buf_size": 4096, 00:29:42.279 "send_buf_size": 4096, 00:29:42.279 "enable_recv_pipe": true, 00:29:42.279 "enable_quickack": false, 00:29:42.279 "enable_placement_id": 0, 00:29:42.279 "enable_zerocopy_send_server": true, 00:29:42.279 "enable_zerocopy_send_client": false, 00:29:42.279 "zerocopy_threshold": 0, 00:29:42.279 "tls_version": 0, 00:29:42.279 "enable_ktls": false 00:29:42.279 } 00:29:42.279 } 00:29:42.279 ] 00:29:42.279 }, 00:29:42.279 { 00:29:42.279 "subsystem": "vmd", 00:29:42.279 "config": [] 00:29:42.279 }, 00:29:42.279 { 00:29:42.279 "subsystem": "accel", 00:29:42.279 "config": [ 00:29:42.279 { 00:29:42.279 "method": "accel_set_options", 00:29:42.279 "params": { 00:29:42.279 "small_cache_size": 128, 00:29:42.279 "large_cache_size": 16, 00:29:42.279 "task_count": 2048, 00:29:42.279 "sequence_count": 2048, 00:29:42.279 "buf_count": 2048 00:29:42.279 } 00:29:42.279 } 00:29:42.279 ] 00:29:42.279 }, 00:29:42.279 { 00:29:42.279 "subsystem": "bdev", 00:29:42.279 "config": [ 00:29:42.279 { 00:29:42.279 "method": "bdev_set_options", 00:29:42.279 "params": { 00:29:42.279 "bdev_io_pool_size": 65535, 00:29:42.279 "bdev_io_cache_size": 256, 00:29:42.279 "bdev_auto_examine": true, 00:29:42.279 "iobuf_small_cache_size": 128, 00:29:42.279 "iobuf_large_cache_size": 16 00:29:42.280 } 00:29:42.280 }, 00:29:42.280 { 00:29:42.280 "method": "bdev_raid_set_options", 00:29:42.280 "params": { 00:29:42.280 "process_window_size_kb": 1024 00:29:42.280 } 00:29:42.280 }, 
00:29:42.280 { 00:29:42.280 "method": "bdev_iscsi_set_options", 00:29:42.280 "params": { 00:29:42.280 "timeout_sec": 30 00:29:42.280 } 00:29:42.280 }, 00:29:42.280 { 00:29:42.280 "method": "bdev_nvme_set_options", 00:29:42.280 "params": { 00:29:42.280 "action_on_timeout": "none", 00:29:42.280 "timeout_us": 0, 00:29:42.280 "timeout_admin_us": 0, 00:29:42.280 "keep_alive_timeout_ms": 10000, 00:29:42.280 "arbitration_burst": 0, 00:29:42.280 "low_priority_weight": 0, 00:29:42.280 "medium_priority_weight": 0, 00:29:42.280 "high_priority_weight": 0, 00:29:42.280 "nvme_adminq_poll_period_us": 10000, 00:29:42.280 "nvme_ioq_poll_period_us": 0, 00:29:42.280 "io_queue_requests": 512, 00:29:42.280 "delay_cmd_submit": true, 00:29:42.280 "transport_retry_count": 4, 00:29:42.280 "bdev_retry_count": 3, 00:29:42.280 "transport_ack_timeout": 0, 00:29:42.280 "ctrlr_loss_timeout_sec": 0, 00:29:42.280 "reconnect_delay_sec": 0, 00:29:42.280 "fast_io_fail_timeout_sec": 0, 00:29:42.280 "disable_auto_failback": false, 00:29:42.280 "generate_uuids": false, 00:29:42.280 "transport_tos": 0, 00:29:42.280 "nvme_error_stat": false, 00:29:42.280 "rdma_srq_size": 0, 00:29:42.280 "io_path_stat": false, 00:29:42.280 "allow_accel_sequence": false, 00:29:42.280 "rdma_max_cq_size": 0, 00:29:42.280 "rdma_cm_event_timeout_ms": 0, 00:29:42.280 "dhchap_digests": [ 00:29:42.280 "sha256", 00:29:42.280 "sha384", 00:29:42.280 "sha512" 00:29:42.280 ], 00:29:42.280 "dhchap_dhgroups": [ 00:29:42.280 "null", 00:29:42.280 "ffdhe2048", 00:29:42.280 "ffdhe3072", 00:29:42.280 "ffdhe4096", 00:29:42.280 "ffdhe6144", 00:29:42.280 "ffdhe8192" 00:29:42.280 ] 00:29:42.280 } 00:29:42.280 }, 00:29:42.280 { 00:29:42.280 "method": "bdev_nvme_attach_controller", 00:29:42.280 "params": { 00:29:42.280 "name": "nvme0", 00:29:42.280 "trtype": "TCP", 00:29:42.280 "adrfam": "IPv4", 00:29:42.280 "traddr": "127.0.0.1", 00:29:42.280 "trsvcid": "4420", 00:29:42.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.280 "prchk_reftag": false, 00:29:42.280 "prchk_guard": false, 00:29:42.280 "ctrlr_loss_timeout_sec": 0, 00:29:42.280 "reconnect_delay_sec": 0, 00:29:42.280 "fast_io_fail_timeout_sec": 0, 00:29:42.280 "psk": "key0", 00:29:42.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.280 "hdgst": false, 00:29:42.280 "ddgst": false 00:29:42.280 } 00:29:42.280 }, 00:29:42.280 { 00:29:42.280 "method": "bdev_nvme_set_hotplug", 00:29:42.280 "params": { 00:29:42.280 "period_us": 100000, 00:29:42.280 "enable": false 00:29:42.280 } 00:29:42.280 }, 00:29:42.280 { 00:29:42.280 "method": "bdev_wait_for_examine" 00:29:42.280 } 00:29:42.280 ] 00:29:42.280 }, 00:29:42.280 { 00:29:42.280 "subsystem": "nbd", 00:29:42.280 "config": [] 00:29:42.280 } 00:29:42.280 ] 00:29:42.280 }' 00:29:42.280 09:22:54 keyring_file -- keyring/file.sh@114 -- # killprocess 84294 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 84294 ']' 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@951 -- # kill -0 84294 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@952 -- # uname 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84294 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@965 -- # echo 
'killing process with pid 84294' 00:29:42.280 killing process with pid 84294 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@966 -- # kill 84294 00:29:42.280 Received shutdown signal, test time was about 1.000000 seconds 00:29:42.280 00:29:42.280 Latency(us) 00:29:42.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.280 =================================================================================================================== 00:29:42.280 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.280 09:22:54 keyring_file -- common/autotest_common.sh@971 -- # wait 84294 00:29:42.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.538 09:22:54 keyring_file -- keyring/file.sh@117 -- # bperfpid=84556 00:29:42.538 09:22:54 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84556 /var/tmp/bperf.sock 00:29:42.538 09:22:54 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 84556 ']' 00:29:42.538 09:22:54 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.538 09:22:54 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:42.538 09:22:54 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:42.538 09:22:54 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.538 09:22:54 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:42.538 09:22:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:42.538 09:22:54 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:42.538 "subsystems": [ 00:29:42.538 { 00:29:42.538 "subsystem": "keyring", 00:29:42.538 "config": [ 00:29:42.538 { 00:29:42.538 "method": "keyring_file_add_key", 00:29:42.538 "params": { 00:29:42.538 "name": "key0", 00:29:42.538 "path": "/tmp/tmp.oApIAoT2p1" 00:29:42.538 } 00:29:42.538 }, 00:29:42.538 { 00:29:42.538 "method": "keyring_file_add_key", 00:29:42.538 "params": { 00:29:42.538 "name": "key1", 00:29:42.538 "path": "/tmp/tmp.vjAgxV7hC7" 00:29:42.538 } 00:29:42.538 } 00:29:42.538 ] 00:29:42.538 }, 00:29:42.538 { 00:29:42.538 "subsystem": "iobuf", 00:29:42.538 "config": [ 00:29:42.538 { 00:29:42.538 "method": "iobuf_set_options", 00:29:42.538 "params": { 00:29:42.538 "small_pool_count": 8192, 00:29:42.538 "large_pool_count": 1024, 00:29:42.538 "small_bufsize": 8192, 00:29:42.539 "large_bufsize": 135168 00:29:42.539 } 00:29:42.539 } 00:29:42.539 ] 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "subsystem": "sock", 00:29:42.539 "config": [ 00:29:42.539 { 00:29:42.539 "method": "sock_impl_set_options", 00:29:42.539 "params": { 00:29:42.539 "impl_name": "uring", 00:29:42.539 "recv_buf_size": 2097152, 00:29:42.539 "send_buf_size": 2097152, 00:29:42.539 "enable_recv_pipe": true, 00:29:42.539 "enable_quickack": false, 00:29:42.539 "enable_placement_id": 0, 00:29:42.539 "enable_zerocopy_send_server": false, 00:29:42.539 "enable_zerocopy_send_client": false, 00:29:42.539 "zerocopy_threshold": 0, 00:29:42.539 "tls_version": 0, 00:29:42.539 "enable_ktls": false 00:29:42.539 } 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "method": "sock_impl_set_options", 00:29:42.539 "params": { 00:29:42.539 "impl_name": "posix", 00:29:42.539 "recv_buf_size": 2097152, 00:29:42.539 "send_buf_size": 2097152, 00:29:42.539 "enable_recv_pipe": 
true, 00:29:42.539 "enable_quickack": false, 00:29:42.539 "enable_placement_id": 0, 00:29:42.539 "enable_zerocopy_send_server": true, 00:29:42.539 "enable_zerocopy_send_client": false, 00:29:42.539 "zerocopy_threshold": 0, 00:29:42.539 "tls_version": 0, 00:29:42.539 "enable_ktls": false 00:29:42.539 } 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "method": "sock_impl_set_options", 00:29:42.539 "params": { 00:29:42.539 "impl_name": "ssl", 00:29:42.539 "recv_buf_size": 4096, 00:29:42.539 "send_buf_size": 4096, 00:29:42.539 "enable_recv_pipe": true, 00:29:42.539 "enable_quickack": false, 00:29:42.539 "enable_placement_id": 0, 00:29:42.539 "enable_zerocopy_send_server": true, 00:29:42.539 "enable_zerocopy_send_client": false, 00:29:42.539 "zerocopy_threshold": 0, 00:29:42.539 "tls_version": 0, 00:29:42.539 "enable_ktls": false 00:29:42.539 } 00:29:42.539 } 00:29:42.539 ] 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "subsystem": "vmd", 00:29:42.539 "config": [] 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "subsystem": "accel", 00:29:42.539 "config": [ 00:29:42.539 { 00:29:42.539 "method": "accel_set_options", 00:29:42.539 "params": { 00:29:42.539 "small_cache_size": 128, 00:29:42.539 "large_cache_size": 16, 00:29:42.539 "task_count": 2048, 00:29:42.539 "sequence_count": 2048, 00:29:42.539 "buf_count": 2048 00:29:42.539 } 00:29:42.539 } 00:29:42.539 ] 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "subsystem": "bdev", 00:29:42.539 "config": [ 00:29:42.539 { 00:29:42.539 "method": "bdev_set_options", 00:29:42.539 "params": { 00:29:42.539 "bdev_io_pool_size": 65535, 00:29:42.539 "bdev_io_cache_size": 256, 00:29:42.539 "bdev_auto_examine": true, 00:29:42.539 "iobuf_small_cache_size": 128, 00:29:42.539 "iobuf_large_cache_size": 16 00:29:42.539 } 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "method": "bdev_raid_set_options", 00:29:42.539 "params": { 00:29:42.539 "process_window_size_kb": 1024 00:29:42.539 } 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "method": "bdev_iscsi_set_options", 00:29:42.539 "params": { 00:29:42.539 "timeout_sec": 30 00:29:42.539 } 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "method": "bdev_nvme_set_options", 00:29:42.539 "params": { 00:29:42.539 "action_on_timeout": "none", 00:29:42.539 "timeout_us": 0, 00:29:42.539 "timeout_admin_us": 0, 00:29:42.539 "keep_alive_timeout_ms": 10000, 00:29:42.539 "arbitration_burst": 0, 00:29:42.539 "low_priority_weight": 0, 00:29:42.539 "medium_priority_weight": 0, 00:29:42.539 "high_priority_weight": 0, 00:29:42.539 "nvme_adminq_poll_period_us": 10000, 00:29:42.539 "nvme_ioq_poll_period_us": 0, 00:29:42.539 "io_queue_requests": 512, 00:29:42.539 "delay_cmd_submit": true, 00:29:42.539 "transport_retry_count": 4, 00:29:42.539 "bdev_retry_count": 3, 00:29:42.539 "transport_ack_timeout": 0, 00:29:42.539 "ctrlr_loss_timeout_sec": 0, 00:29:42.539 "reconnect_delay_sec": 0, 00:29:42.539 "fast_io_fail_timeout_sec": 0, 00:29:42.539 "disable_auto_failback": false, 00:29:42.539 "generate_uuids": false, 00:29:42.539 "transport_tos": 0, 00:29:42.539 "nvme_error_stat": false, 00:29:42.539 "rdma_srq_size": 0, 00:29:42.539 "io_path_stat": false, 00:29:42.539 "allow_accel_sequence": false, 00:29:42.539 "rdma_max_cq_size": 0, 00:29:42.539 "rdma_cm_event_timeout_ms": 0, 00:29:42.539 "dhchap_digests": [ 00:29:42.539 "sha256", 00:29:42.539 "sha384", 00:29:42.539 "sha512" 00:29:42.539 ], 00:29:42.539 "dhchap_dhgroups": [ 00:29:42.539 "null", 00:29:42.539 "ffdhe2048", 00:29:42.539 "ffdhe3072", 00:29:42.539 "ffdhe4096", 00:29:42.539 "ffdhe6144", 00:29:42.539 "ffdhe8192" 
00:29:42.539 ] 00:29:42.539 } 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "method": "bdev_nvme_attach_controller", 00:29:42.539 "params": { 00:29:42.539 "name": "nvme0", 00:29:42.539 "trtype": "TCP", 00:29:42.539 "adrfam": "IPv4", 00:29:42.539 "traddr": "127.0.0.1", 00:29:42.539 "trsvcid": "4420", 00:29:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.539 "prchk_reftag": false, 00:29:42.539 "prchk_guard": false, 00:29:42.539 "ctrlr_loss_timeout_sec": 0, 00:29:42.539 "reconnect_delay_sec": 0, 00:29:42.539 "fast_io_fail_timeout_sec": 0, 00:29:42.539 "psk": "key0", 00:29:42.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.539 "hdgst": false, 00:29:42.539 "ddgst": false 00:29:42.539 } 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "method": "bdev_nvme_set_hotplug", 00:29:42.539 "params": { 00:29:42.539 "period_us": 100000, 00:29:42.539 "enable": false 00:29:42.539 } 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "method": "bdev_wait_for_examine" 00:29:42.539 } 00:29:42.539 ] 00:29:42.539 }, 00:29:42.539 { 00:29:42.539 "subsystem": "nbd", 00:29:42.539 "config": [] 00:29:42.539 } 00:29:42.539 ] 00:29:42.539 }' 00:29:42.539 [2024-05-15 09:22:54.975565] Starting SPDK v24.05-pre git sha1 9526734a3 / DPDK 23.11.0 initialization... 00:29:42.539 [2024-05-15 09:22:54.975957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84556 ] 00:29:42.797 [2024-05-15 09:22:55.125900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.055 [2024-05-15 09:22:55.255917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.055 [2024-05-15 09:22:55.435538] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:43.621 09:22:56 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:43.621 09:22:56 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:29:43.621 09:22:56 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:43.621 09:22:56 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:43.621 09:22:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.879 09:22:56 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:43.879 09:22:56 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:43.879 09:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:43.879 09:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.879 09:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:44.138 09:22:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.138 09:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:44.396 09:22:56 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:44.396 09:22:56 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:44.396 09:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:44.396 09:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:44.396 09:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:44.396 09:22:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:29:44.396 09:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:44.654 09:22:56 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:44.654 09:22:56 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:44.654 09:22:56 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:44.654 09:22:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:44.912 09:22:57 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:44.912 09:22:57 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:44.912 09:22:57 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.oApIAoT2p1 /tmp/tmp.vjAgxV7hC7 00:29:44.912 09:22:57 keyring_file -- keyring/file.sh@20 -- # killprocess 84556 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 84556 ']' 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@951 -- # kill -0 84556 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@952 -- # uname 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84556 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84556' 00:29:44.912 killing process with pid 84556 00:29:44.912 09:22:57 keyring_file -- common/autotest_common.sh@966 -- # kill 84556 00:29:44.912 Received shutdown signal, test time was about 1.000000 seconds 00:29:44.912 00:29:44.912 Latency(us) 00:29:44.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.913 =================================================================================================================== 00:29:44.913 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:44.913 09:22:57 keyring_file -- common/autotest_common.sh@971 -- # wait 84556 00:29:45.171 09:22:57 keyring_file -- keyring/file.sh@21 -- # killprocess 84277 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 84277 ']' 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@951 -- # kill -0 84277 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@952 -- # uname 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84277 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84277' 00:29:45.171 killing process with pid 84277 00:29:45.171 09:22:57 keyring_file -- common/autotest_common.sh@966 -- # kill 84277 00:29:45.171 [2024-05-15 09:22:57.420266] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 09:22:57 keyring_file -- common/autotest_common.sh@971 -- # wait 84277 00:29:45.171 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:45.171 [2024-05-15 
09:22:57.420640] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:45.430 00:29:45.430 real 0m17.216s 00:29:45.430 user 0m42.485s 00:29:45.430 sys 0m3.621s 00:29:45.430 ************************************ 00:29:45.430 END TEST keyring_file 00:29:45.430 ************************************ 00:29:45.430 09:22:57 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:45.430 09:22:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:45.430 09:22:57 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:29:45.430 09:22:57 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:45.430 09:22:57 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:29:45.430 09:22:57 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:45.430 09:22:57 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:45.430 09:22:57 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:45.430 09:22:57 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:29:45.430 09:22:57 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:29:45.430 09:22:57 -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:45.430 09:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:45.430 09:22:57 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:29:45.430 09:22:57 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:29:45.430 09:22:57 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:29:45.430 09:22:57 -- common/autotest_common.sh@10 -- # set +x 00:29:46.805 INFO: APP EXITING 00:29:46.805 INFO: killing all VMs 00:29:46.805 INFO: killing vhost app 00:29:46.805 INFO: EXIT DONE 00:29:47.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:47.628 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:47.628 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:48.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:48.195 Cleaning 00:29:48.195 Removing: /var/run/dpdk/spdk0/config 00:29:48.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:48.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:48.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:48.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:48.195 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:48.195 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:48.195 Removing: /var/run/dpdk/spdk1/config 00:29:48.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:48.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:48.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:48.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:48.195 Removing: /var/run/dpdk/spdk1/fbarray_memzone 
00:29:48.195 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:48.195 Removing: /var/run/dpdk/spdk2/config 00:29:48.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:48.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:48.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:48.196 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:48.196 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:48.196 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:48.454 Removing: /var/run/dpdk/spdk3/config 00:29:48.454 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:48.454 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:48.454 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:48.454 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:48.454 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:48.454 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:48.454 Removing: /var/run/dpdk/spdk4/config 00:29:48.454 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:48.454 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:48.454 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:48.454 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:48.454 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:48.454 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:48.454 Removing: /dev/shm/nvmf_trace.0 00:29:48.454 Removing: /dev/shm/spdk_tgt_trace.pid58000 00:29:48.454 Removing: /var/run/dpdk/spdk0 00:29:48.454 Removing: /var/run/dpdk/spdk1 00:29:48.454 Removing: /var/run/dpdk/spdk2 00:29:48.454 Removing: /var/run/dpdk/spdk3 00:29:48.454 Removing: /var/run/dpdk/spdk4 00:29:48.454 Removing: /var/run/dpdk/spdk_pid57855 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58000 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58198 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58279 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58312 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58416 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58434 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58552 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58748 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58883 00:29:48.454 Removing: /var/run/dpdk/spdk_pid58953 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59029 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59115 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59186 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59225 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59260 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59322 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59421 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59854 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59906 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59957 00:29:48.454 Removing: /var/run/dpdk/spdk_pid59973 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60040 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60056 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60123 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60139 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60190 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60207 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60248 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60268 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60391 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60432 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60501 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60558 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60577 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60641 00:29:48.454 Removing: 
/var/run/dpdk/spdk_pid60675 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60710 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60750 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60779 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60819 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60848 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60888 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60917 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60957 00:29:48.454 Removing: /var/run/dpdk/spdk_pid60986 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61026 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61055 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61094 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61124 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61163 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61199 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61236 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61274 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61308 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61344 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61414 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61501 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61804 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61827 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61859 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61878 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61888 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61907 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61926 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61947 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61966 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61988 00:29:48.454 Removing: /var/run/dpdk/spdk_pid61998 00:29:48.454 Removing: /var/run/dpdk/spdk_pid62017 00:29:48.454 Removing: /var/run/dpdk/spdk_pid62036 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62057 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62076 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62090 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62105 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62124 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62143 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62159 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62195 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62208 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62238 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62302 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62330 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62340 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62374 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62383 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62391 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62433 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62447 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62481 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62489 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62500 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62515 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62519 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62534 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62538 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62553 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62587 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62608 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62623 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62646 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62661 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62673 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62709 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62726 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62747 
00:29:48.712 Removing: /var/run/dpdk/spdk_pid62760 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62768 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62775 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62788 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62790 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62803 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62811 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62879 00:29:48.712 Removing: /var/run/dpdk/spdk_pid62932 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63037 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63070 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63110 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63130 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63152 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63166 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63198 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63219 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63289 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63305 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63349 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63398 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63450 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63473 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63565 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63613 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63645 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63864 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63961 00:29:48.712 Removing: /var/run/dpdk/spdk_pid63990 00:29:48.712 Removing: /var/run/dpdk/spdk_pid64308 00:29:48.712 Removing: /var/run/dpdk/spdk_pid64352 00:29:48.712 Removing: /var/run/dpdk/spdk_pid64645 00:29:48.712 Removing: /var/run/dpdk/spdk_pid65053 00:29:48.712 Removing: /var/run/dpdk/spdk_pid65322 00:29:48.712 Removing: /var/run/dpdk/spdk_pid66117 00:29:48.713 Removing: /var/run/dpdk/spdk_pid66940 00:29:48.713 Removing: /var/run/dpdk/spdk_pid67056 00:29:48.713 Removing: /var/run/dpdk/spdk_pid67124 00:29:48.713 Removing: /var/run/dpdk/spdk_pid68391 00:29:48.713 Removing: /var/run/dpdk/spdk_pid68600 00:29:48.713 Removing: /var/run/dpdk/spdk_pid71742 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72045 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72153 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72281 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72314 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72336 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72364 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72461 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72596 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72746 00:29:48.713 Removing: /var/run/dpdk/spdk_pid72821 00:29:48.713 Removing: /var/run/dpdk/spdk_pid73013 00:29:48.713 Removing: /var/run/dpdk/spdk_pid73092 00:29:48.713 Removing: /var/run/dpdk/spdk_pid73189 00:29:48.713 Removing: /var/run/dpdk/spdk_pid73489 00:29:48.713 Removing: /var/run/dpdk/spdk_pid73874 00:29:48.713 Removing: /var/run/dpdk/spdk_pid73880 00:29:48.713 Removing: /var/run/dpdk/spdk_pid74156 00:29:48.713 Removing: /var/run/dpdk/spdk_pid74171 00:29:48.713 Removing: /var/run/dpdk/spdk_pid74191 00:29:48.713 Removing: /var/run/dpdk/spdk_pid74227 00:29:48.972 Removing: /var/run/dpdk/spdk_pid74232 00:29:48.972 Removing: /var/run/dpdk/spdk_pid74521 00:29:48.972 Removing: /var/run/dpdk/spdk_pid74564 00:29:48.972 Removing: /var/run/dpdk/spdk_pid74849 00:29:48.972 Removing: /var/run/dpdk/spdk_pid75050 00:29:48.972 Removing: /var/run/dpdk/spdk_pid75438 00:29:48.972 Removing: /var/run/dpdk/spdk_pid75939 00:29:48.972 Removing: /var/run/dpdk/spdk_pid76755 00:29:48.972 Removing: 
/var/run/dpdk/spdk_pid77350 00:29:48.972 Removing: /var/run/dpdk/spdk_pid77352 00:29:48.972 Removing: /var/run/dpdk/spdk_pid79230 00:29:48.972 Removing: /var/run/dpdk/spdk_pid79296 00:29:48.972 Removing: /var/run/dpdk/spdk_pid79356 00:29:48.972 Removing: /var/run/dpdk/spdk_pid79412 00:29:48.972 Removing: /var/run/dpdk/spdk_pid79536 00:29:48.972 Removing: /var/run/dpdk/spdk_pid79592 00:29:48.972 Removing: /var/run/dpdk/spdk_pid79658 00:29:48.972 Removing: /var/run/dpdk/spdk_pid79717 00:29:48.972 Removing: /var/run/dpdk/spdk_pid80032 00:29:48.972 Removing: /var/run/dpdk/spdk_pid81203 00:29:48.972 Removing: /var/run/dpdk/spdk_pid81335 00:29:48.972 Removing: /var/run/dpdk/spdk_pid81578 00:29:48.972 Removing: /var/run/dpdk/spdk_pid82131 00:29:48.972 Removing: /var/run/dpdk/spdk_pid82290 00:29:48.972 Removing: /var/run/dpdk/spdk_pid82447 00:29:48.972 Removing: /var/run/dpdk/spdk_pid82544 00:29:48.972 Removing: /var/run/dpdk/spdk_pid82693 00:29:48.972 Removing: /var/run/dpdk/spdk_pid82802 00:29:48.973 Removing: /var/run/dpdk/spdk_pid83463 00:29:48.973 Removing: /var/run/dpdk/spdk_pid83493 00:29:48.973 Removing: /var/run/dpdk/spdk_pid83528 00:29:48.973 Removing: /var/run/dpdk/spdk_pid83781 00:29:48.973 Removing: /var/run/dpdk/spdk_pid83815 00:29:48.973 Removing: /var/run/dpdk/spdk_pid83846 00:29:48.973 Removing: /var/run/dpdk/spdk_pid84277 00:29:48.973 Removing: /var/run/dpdk/spdk_pid84294 00:29:48.973 Removing: /var/run/dpdk/spdk_pid84556 00:29:48.973 Clean 00:29:48.973 09:23:01 -- common/autotest_common.sh@1448 -- # return 0 00:29:48.973 09:23:01 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:29:48.973 09:23:01 -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:48.973 09:23:01 -- common/autotest_common.sh@10 -- # set +x 00:29:48.973 09:23:01 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:29:48.973 09:23:01 -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:48.973 09:23:01 -- common/autotest_common.sh@10 -- # set +x 00:29:48.973 09:23:01 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:48.973 09:23:01 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:48.973 09:23:01 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:48.973 09:23:01 -- spdk/autotest.sh@387 -- # hash lcov 00:29:48.973 09:23:01 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:48.973 09:23:01 -- spdk/autotest.sh@389 -- # hostname 00:29:48.973 09:23:01 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1701806725-069-updated-1701632595 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:49.241 geninfo: WARNING: invalid characters removed from testname! 
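The coverage post-processing that closes this run follows the usual lcov pattern: capture a tracefile for the test run, merge it with the pre-test baseline, then strip paths that are not SPDK code. The sketch below condenses the commands echoed around this point into a plain script; it assumes cov_base.info was produced earlier in the run (that step is not shown in this section), and the lcov options simply mirror the ones printed in the log.

#!/usr/bin/env bash
# Minimal sketch of the coverage post-processing around this point in the
# log: capture the test-run tracefile, merge it with the pre-test baseline,
# then strip non-SPDK paths. Assumes cov_base.info was captured earlier in
# the run (not shown in this section); flags mirror the log.
set -e

SPDK_DIR=/home/vagrant/spdk_repo/spdk
OUT_DIR=$SPDK_DIR/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"

# 1. Capture coverage gathered while the tests ran.
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

# 2. Merge it with the baseline taken before the tests started.
lcov $LCOV_OPTS -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" \
     -o "$OUT_DIR/cov_total.info"

# 3. Drop coverage for code that is not SPDK proper (DPDK, system headers,
#    and a few sample apps), rewriting cov_total.info in place each pass.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT_DIR/cov_total.info" "$pattern" \
         -o "$OUT_DIR/cov_total.info"
done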
00:30:21.302 09:23:28 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:21.302 09:23:32 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:22.677 09:23:34 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:25.217 09:23:37 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:27.817 09:23:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:30.343 09:23:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:32.867 09:23:44 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:32.867 09:23:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:32.867 09:23:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:32.867 09:23:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.867 09:23:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.867 09:23:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.867 09:23:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.867 09:23:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.867 09:23:44 -- paths/export.sh@5 -- $ export PATH 00:30:32.867 09:23:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.867 09:23:44 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:32.867 09:23:44 -- common/autobuild_common.sh@437 -- $ date +%s 00:30:32.867 09:23:44 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715765024.XXXXXX 00:30:32.867 09:23:44 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715765024.0DTGsM 00:30:32.867 09:23:44 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:30:32.867 09:23:44 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:30:32.867 09:23:44 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:32.867 09:23:44 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:32.867 09:23:44 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:32.867 09:23:44 -- common/autobuild_common.sh@453 -- $ get_config_params 00:30:32.867 09:23:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:32.867 09:23:44 -- common/autotest_common.sh@10 -- $ set +x 00:30:32.867 09:23:44 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:30:32.867 09:23:44 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:30:32.867 09:23:44 -- pm/common@17 -- $ local monitor 00:30:32.867 09:23:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:32.867 09:23:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:32.867 09:23:44 -- pm/common@21 -- $ date +%s 00:30:32.867 09:23:44 -- pm/common@25 -- $ sleep 1 00:30:32.867 09:23:44 -- pm/common@21 -- $ date +%s 00:30:32.867 09:23:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715765024 00:30:32.867 09:23:44 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715765024 00:30:32.867 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715765024_collect-vmstat.pm.log 00:30:32.867 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715765024_collect-cpu-load.pm.log 00:30:33.800 09:23:45 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:30:33.800 09:23:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:33.800 09:23:45 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:33.800 09:23:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:33.800 09:23:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:33.800 09:23:45 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:33.800 09:23:45 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:33.800 09:23:45 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:33.800 09:23:45 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:33.800 09:23:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:33.800 09:23:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:33.800 09:23:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:33.800 09:23:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:33.800 09:23:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:33.800 09:23:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:33.800 09:23:45 -- pm/common@44 -- $ pid=86103 00:30:33.800 09:23:45 -- pm/common@50 -- $ kill -TERM 86103 00:30:33.800 09:23:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:33.800 09:23:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:33.800 09:23:45 -- pm/common@44 -- $ pid=86105 00:30:33.800 09:23:45 -- pm/common@50 -- $ kill -TERM 86105 00:30:33.800 + [[ -n 5031 ]] 00:30:33.800 + sudo kill 5031 00:30:33.809 [Pipeline] } 00:30:33.828 [Pipeline] // timeout 00:30:33.834 [Pipeline] } 00:30:33.853 [Pipeline] // stage 00:30:33.859 [Pipeline] } 00:30:33.877 [Pipeline] // catchError 00:30:33.885 [Pipeline] stage 00:30:33.887 [Pipeline] { (Stop VM) 00:30:33.902 [Pipeline] sh 00:30:34.180 + vagrant halt 00:30:38.363 ==> default: Halting domain... 00:30:44.947 [Pipeline] sh 00:30:45.224 + vagrant destroy -f 00:30:49.467 ==> default: Removing domain... 
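The shutdown path visible here is two small steps: the pm helpers signal the CPU-load and vmstat collectors through their pid files, and the pipeline's Stop VM stage halts and then destroys the vagrant guest. A minimal sketch of that sequence follows, with the directory layout taken from the log and error handling reduced to ignoring monitors that have already exited.

#!/usr/bin/env bash
# Condensed sketch of the shutdown sequence shown above: signal the
# resource monitors through their pid files, then let the Stop VM stage
# dispose of the vagrant guest. Paths mirror the log.

POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power

for monitor in collect-cpu-load collect-vmstat; do
    pid_file="$POWER_DIR/$monitor.pid"
    if [[ -e "$pid_file" ]]; then
        kill -TERM "$(cat "$pid_file")" || true   # monitor may have exited already
    fi
done

# Jenkins' Stop VM stage then shuts the guest down and deletes it.
vagrant halt
vagrant destroy -f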
00:30:49.480 [Pipeline] sh 00:30:49.757 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:49.766 [Pipeline] } 00:30:49.782 [Pipeline] // stage 00:30:49.787 [Pipeline] } 00:30:49.804 [Pipeline] // dir 00:30:49.809 [Pipeline] } 00:30:49.824 [Pipeline] // wrap 00:30:49.827 [Pipeline] } 00:30:49.841 [Pipeline] // catchError 00:30:49.850 [Pipeline] stage 00:30:49.852 [Pipeline] { (Epilogue) 00:30:49.865 [Pipeline] sh 00:30:50.142 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:56.707 [Pipeline] catchError 00:30:56.709 [Pipeline] { 00:30:56.726 [Pipeline] sh 00:30:57.004 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:57.269 Artifacts sizes are good 00:30:57.280 [Pipeline] } 00:30:57.295 [Pipeline] // catchError 00:30:57.305 [Pipeline] archiveArtifacts 00:30:57.311 Archiving artifacts 00:30:57.496 [Pipeline] cleanWs 00:30:57.505 [WS-CLEANUP] Deleting project workspace... 00:30:57.505 [WS-CLEANUP] Deferred wipeout is used... 00:30:57.511 [WS-CLEANUP] done 00:30:57.512 [Pipeline] } 00:30:57.526 [Pipeline] // stage 00:30:57.530 [Pipeline] } 00:30:57.542 [Pipeline] // node 00:30:57.547 [Pipeline] End of Pipeline 00:30:57.582 Finished: SUCCESS
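For reference, the keyring_file flow exercised earlier in this section reduces to a short series of rpc.py calls against the bdevperf RPC socket. The sketch below strings them together using the socket path, key names, and NQNs from the log; the key file paths are placeholders standing in for the mktemp-generated /tmp/tmp.* files used by the test.

#!/usr/bin/env bash
# Recap of the keyring_file flow from this section, written as direct
# rpc.py calls against the bdevperf RPC socket. Key file paths are
# placeholders; the test used mktemp-generated /tmp/tmp.* files.
set -e

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Register two PSK files with the keyring.
$RPC keyring_file_add_key key0 /tmp/psk0.txt
$RPC keyring_file_add_key key1 /tmp/psk1.txt

# Attach an NVMe/TCP controller that authenticates with key0.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
     --psk key0

# While the controller is attached, key0 is referenced twice (keyring +
# bdev_nvme), which is what the refcnt checks in the log assert.
$RPC keyring_get_keys | jq '.[] | select(.name == "key0")'

# Tear down in reverse order.
$RPC bdev_nvme_detach_controller nvme0
$RPC keyring_file_remove_key key0
$RPC keyring_file_remove_key key1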